CN104461317A - Method and system for playing multimedia content by polyhedral object - Google Patents


Info

Publication number
CN104461317A
CN104461317A (application CN201310571807.6A)
Authority
CN
China
Prior art keywords
user
module
polyhedron object
multimedia
monitoring
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310571807.6A
Other languages
Chinese (zh)
Inventor
邹嘉骏
陈羿帆
林玠佑
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Utechzone Co Ltd
Original Assignee
Utechzone Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Utechzone Co Ltd filed Critical Utechzone Co Ltd
Publication of CN104461317A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides a method and a system for playing multimedia content using a polyhedral object. An image sequence captured by a sensing unit is analyzed, and corresponding actions are executed. When a user is detected within a trigger range, the polyhedral object is displayed on a display unit. When the user is detected within an operation range inside the trigger range, the interactive mode of the polyhedral object is started, and multimedia content is assigned to each playing face of the polyhedral object. In the interactive mode, the user's limb movements within the operation range are monitored, and corresponding operations are performed on the polyhedral object according to those movements.

Description

Method and system for playing multimedia content with a polyhedral object
Technical field
The invention relates to a method and system for controlling displayed content, and in particular to a method and system for playing multimedia content with a polyhedral object.
Background technology
With the development of technology, shop owners and manufacturers can deliver advertising messages through electronic advertising players to publicize new products or raise the profile of a storefront and its products. Traditional advertising marketing generally presets several advertisements and their playing sequence in the electronic advertising player, which then plays them repeatedly in that fixed order. However, this traditional method is passive: there is no way to know whether a pedestrian will stop long enough to watch the electronic advertising player, and it cannot interact with the user in any way.
Summary of the invention
The invention provides a method for playing multimedia content with a polyhedral object, so that the content shown on the display unit can interact with the user, thereby increasing enjoyment. Furthermore, by presenting content on the multiple playing faces of the polyhedral object, the user can watch several pieces of multimedia content at the same time, improving the effectiveness of information dissemination.
The method of playing multimedia content of the present invention analyzes an image sequence obtained by a sensing unit and executes corresponding actions, comprising: when a user is detected within a trigger range, displaying a polyhedral object on a display unit; when the user is detected within an operation range inside the trigger range, starting the interactive mode of the polyhedral object and assigning multiple pieces of multimedia content to the multiple playing faces of the polyhedral object; in the interactive mode, monitoring the limb movements the user makes within the operation range; and performing corresponding operations on the polyhedral object according to those limb movements.
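As a minimal illustrative sketch (not part of the patent), the detect-display-interact sequence described above can be modeled as a small state machine; the names `Mode` and `next_mode` are invented for this example:

```python
from enum import Enum, auto

class Mode(Enum):
    IDLE = auto()         # no user in the trigger range: nothing is shown
    DISPLAYED = auto()    # user in the trigger range: polyhedral object shown
    INTERACTIVE = auto()  # user in the operation range: faces assigned, gestures handled

def next_mode(in_trigger: bool, in_operation: bool) -> Mode:
    """One step of the mode logic: leaving the trigger range removes the
    object, entering the operation range starts the interactive mode, and
    being in the trigger range only falls back to plain display."""
    if not in_trigger:
        return Mode.IDLE
    if in_operation:
        return Mode.INTERACTIVE
    return Mode.DISPLAYED
```

Because the operation range lies inside the trigger range, a user in the operation range is always also in the trigger range, so the two booleans suffice to select the mode.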
In one embodiment of the invention, when a user is detected within the trigger range and the polyhedral object has been displayed on the display unit, a following mode of the polyhedral object is started. In the following mode, the user's movement information is monitored, and the polyhedral object moves correspondingly according to that information.
In one embodiment of the invention, after the interactive mode of the polyhedral object has been started, the interactive mode is closed when the user is detected leaving the operation range.
In one embodiment of the invention, when the user is detected within the operation range inside the trigger range, it may further be monitored whether the user is facing the display unit. When the user is facing the display unit, the interactive mode of the polyhedral object is started. Furthermore, when the user is facing the display unit, characteristic information of at least one of the user's body region, face region, hand region, and foot region is analyzed to obtain an analysis result. According to the analysis result, the multimedia content assigned to each playing face is determined.
In one embodiment of the invention, each playing face may display a corresponding operation interface according to its assigned multimedia content. In the step of performing a corresponding operation on the polyhedral object according to the limb movement, it may further be determined whether the limb movement belongs to a first-type action or a second-type action. When the limb movement belongs to a first-type action, the corresponding operation is performed on the whole polyhedral object. When it belongs to a second-type action, the corresponding operation is performed on the operation interface of one of the playing faces. For example, the first-type actions are rotate, drag, and zoom gestures, and the second-type action is a click gesture performed on the operation interface.
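Under the assumption that gestures arrive already recognized as strings (an invented representation, not from the patent), the two-way dispatch just described might look like:

```python
FIRST_TYPE = {"rotate", "drag", "zoom"}   # operate on the whole polyhedral object
SECOND_TYPE = {"click"}                   # operate on one face's operation interface

def dispatch(gesture: str, facing_face: int) -> str:
    """Route a recognized gesture either to the whole object or to the
    interface of the playing face the user is facing, per the two action
    types described above."""
    if gesture in FIRST_TYPE:
        return f"apply {gesture} to whole polyhedron"
    if gesture in SECOND_TYPE:
        return f"apply {gesture} to interface of face {facing_face}"
    return "ignore"
```

Unrecognized movements are ignored rather than guessed at, which keeps the interaction predictable.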
In one embodiment of the invention, the method may also calculate an operating frequency for each piece of multimedia content according to the corresponding operations performed on its operation interface. Then, taking the category of the multimedia content with the highest operating frequency as a basis, several other pieces of multimedia content are selected from that category, and the playing faces are changed to show those other pieces of content.
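A hedged sketch of this frequency-driven reselection (the data shapes and names are assumptions made for illustration, not the patent's format):

```python
from collections import Counter

def reselect(ops_log, category_of, catalog):
    """Count operations per content id, find the category of the most-operated
    content, and return the other items of that category as the new face
    contents."""
    freq = Counter(ops_log)
    top_content, _ = freq.most_common(1)[0]
    top_category = category_of[top_content]
    return [c for c in catalog
            if category_of[c] == top_category and c != top_content]

# Example: "ad1" (sport) is operated most, so other sport items are chosen.
new_faces = reselect(
    ops_log=["ad1", "ad1", "ad2", "ad3"],
    category_of={"ad1": "sport", "ad2": "food", "ad3": "sport", "ad4": "sport"},
    catalog=["ad1", "ad2", "ad3", "ad4"],
)
```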
In one embodiment of the invention, after the polyhedral object has been displayed on the display unit, the display of the polyhedral object is removed when the user is detected leaving the trigger range.
In one embodiment of the invention, when a user is detected within the trigger range, the polyhedral object may be displayed at a second position on the display unit corresponding to the first position where the user currently stands.
The system for playing multimedia content of the present invention comprises a display unit, a sensing unit, a front-end computing unit, and a back-end computing unit. The sensing unit obtains an image sequence. The front-end computing unit is coupled to the sensing unit and receives the image sequence. Through an image processing module, the front-end computing unit performs image-processing operations on the image sequence, thereby monitoring whether a user is present and what limb movements the user makes, and produces a monitoring result accordingly. The back-end computing unit is coupled to the display unit and the front-end computing unit and receives the monitoring result. Through an operation processing module, the back-end computing unit carries out interactive operations according to the monitoring result. When the image processing module detects a user within the trigger range, the operation processing module displays the polyhedral object on the display unit. When the image processing module detects that the user is within the operation range inside the trigger range, the operation processing module starts the interactive mode of the polyhedral object and assigns multiple pieces of multimedia content to the multiple playing faces of the polyhedral object. In the interactive mode, the image processing module monitors the limb movements the user makes within the operation range, and the operation processing module performs corresponding operations on the polyhedral object according to those movements.
In one embodiment of the invention, the image processing module comprises: a person monitoring module, which monitors whether a user is present in the trigger range or the operation range; and a behavior monitoring module, which monitors the user's limb movements. The operation processing module comprises: an object control module, which generates the polyhedral object and displays it on the display unit; a mode switch module, which starts or closes the interactive mode of the polyhedral object; a multimedia assignment module, which assigns different pieces of multimedia content to the playing faces of the polyhedral object; and an execution module, which performs corresponding operations on the polyhedral object according to the limb movements. When the person monitoring module detects a user within the trigger range, the mode switch module starts the following mode of the polyhedral object; in the following mode, the behavior monitoring module monitors the user's movement information, and the execution module moves the polyhedral object correspondingly according to that information.
In one embodiment of the invention, when the person monitoring module detects that the user has left the operation range, the mode switch module closes the interactive mode of the polyhedral object.
In one embodiment of the invention, the person monitoring module also monitors whether the user is facing the display unit, and when the user is facing the display unit, the mode switch module starts the interactive mode of the polyhedral object.
In one embodiment of the invention, the image processing module further comprises a person analysis module, which analyzes characteristic information of at least one of the user's body region, face region, hand region, and foot region to obtain an analysis result. The multimedia assignment module then determines, according to the analysis result, the multimedia content assigned to each playing face.
In one embodiment of the invention, the object control module may display an operation interface on each playing face according to the multimedia content assigned to that face. Furthermore, the behavior monitoring module determines whether a limb movement belongs to a first-type action or a second-type action. When the limb movement belongs to a first-type action, the execution module performs the corresponding operation on the whole polyhedral object. When it belongs to a second-type action, the execution module performs the corresponding operation on the operation interface of one of the playing faces. The first-type actions are rotate, drag, and zoom gestures, and the second-type action is a click gesture performed on the operation interface.
In one embodiment of the invention, the operation processing module further comprises a statistics module, which calculates the operating frequency of each piece of multimedia content according to the corresponding operations performed on the operation interfaces. The multimedia assignment module then, taking the category of the multimedia content with the highest operating frequency as a basis, selects several other pieces of multimedia content from that category and changes the playing faces to show them.
In one embodiment of the invention, when the person monitoring module detects that the user has left the trigger range, the object control module removes the display of the polyhedral object.
In one embodiment of the invention, the object control module displays the polyhedral object at a second position on the display unit corresponding to the first position where the user currently stands.
In one embodiment of the invention, the sensing unit comprises: multiple main image-capturing units, arranged below the display unit; and multiple auxiliary image-capturing units, arranged above the display unit in correspondence with the main image-capturing units.
In summary, when a user approaches the display unit, a polyhedral object is shown on the display unit with different multimedia content on each of its playing faces, and corresponding operations are performed according to the user's limb movements. This increases the enjoyment of use and makes users more willing to stay in front of the display unit.
To make the above features and advantages of the invention more comprehensible, embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
Fig. 1 is a block diagram of a system for playing multimedia content according to an embodiment of the invention;
Fig. 2 is a schematic diagram of the placement of the sensing unit according to an embodiment of the invention;
Fig. 3 is a flowchart of a method of playing multimedia content with a polyhedral object according to an embodiment of the invention;
Fig. 4 is a schematic diagram of the ranges set for monitoring the presence of a user according to an embodiment of the invention;
Fig. 5 is a schematic diagram of the polyhedral object according to an embodiment of the invention;
Fig. 6 is a block diagram of the image processing module according to an embodiment of the invention;
Fig. 7 is a block diagram of the operation processing module according to an embodiment of the invention;
Fig. 8A and Fig. 8B are flowcharts of another method of playing multimedia content according to an embodiment of the invention;
Fig. 9 is a schematic diagram of multi-user operation according to an embodiment of the invention.
Description of reference numerals:
100: system for playing multimedia content;
110: display unit;
111: display surface;
120: sensing unit;
130: front-end computing unit;
131: first processing unit;
132: first storage unit;
133: image processing module;
140: back-end computing unit;
141: second processing unit;
142: second storage unit;
143: operation processing module;
201: main image-capturing units;
202: auxiliary image-capturing units;
400: trigger range;
410: operation range;
500, O1, O2, O3: polyhedral objects;
501 ~ 506: playing faces;
601: person monitoring module;
602: behavior monitoring module;
603: person analysis module;
701: object control module;
702: mode switch module;
703: multimedia assignment module;
704: execution module;
705: statistics module;
D1 ~ D3: distances;
G: ground;
H: height;
U1, U2, U3: users;
S305 ~ S335: steps of the method of playing multimedia content with a polyhedral object;
S801 ~ S818: steps of another method of playing multimedia content.
Detailed description of the embodiments
Conventional electronic advertisements are generally played repeatedly in a fixed sequence and cannot attract pedestrians to stop and watch. The present invention therefore proposes a method and system for playing multimedia content with a polyhedral object, so that the content shown on the display unit can interact with the user, attracting the user's attention and increasing the exposure of the played content. To make the content of the invention clearer, embodiments according to which the invention can actually be implemented are given below as examples.
Fig. 1 is a block diagram of a system for playing multimedia content according to an embodiment of the invention. Referring to Fig. 1, the system 100 for playing multimedia content comprises a display unit 110, a sensing unit 120, a front-end computing unit 130, and a back-end computing unit 140.
The display unit 110 is, for example, a liquid-crystal display (LCD), a plasma display panel, a vacuum fluorescent display, a light-emitting diode (LED) display, a field emission display (FED), and/or a display of another suitable kind; its type is not limited here. The display unit 110 may be a single screen or a combination of multiple screens, without limitation. The display unit 110 is, for example, 6 meters long and 1.2 meters high, although it is not limited to these dimensions.
The sensing unit 120 is, for example, an image-capturing unit used to obtain an image sequence. For example, a depth camera or a stereo camera serves as the sensing unit 120, installed at a position from which images of the area in front of the display unit 110 can be captured. The sensing unit 120 may be installed near the frame of the display unit 110 or at a suitable place on the ceiling, although it is not limited to these positions.
For the sake of capture accuracy, the sensing unit 120 may comprise multiple main image-capturing units and multiple auxiliary image-capturing units. The main image-capturing units are arranged below the display unit 110, and the auxiliary image-capturing units are arranged above the display unit 110 in correspondence with the main image-capturing units. That is, one auxiliary image-capturing unit may be arranged above each main image-capturing unit.
For example, Fig. 2 is a schematic diagram of the placement of the sensing unit according to an embodiment of the invention. Referring to Fig. 2, the display unit 110 is positioned at a height H above the ground G. The height H is, for example, between 80 and 90 centimeters inclusive, although this is only an illustration and H is not limited to these values. In this embodiment, three main image-capturing units 201 are arranged on the lower frame of the display unit 110, and three auxiliary image-capturing units 202 are arranged on the upper frame of the display unit 110 in correspondence with them. The auxiliary image-capturing units 202 solve the problem of overlapping users. For example, when two users walk one behind the other, a main image-capturing unit 201 can only photograph the user in front (the user behind being blocked by the user in front). Because the auxiliary image-capturing units 202 are placed higher, they can photograph both users at the same time. The numbers and positions of the main image-capturing units 201 and auxiliary image-capturing units 202 given above are only illustrative and may be changed to suit actual needs.
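The patent does not specify how the two viewpoints are fused; one purely illustrative approach is to take the union of the detected user positions, treating nearby positions as the same person:

```python
def merge_detections(main_positions, aux_positions, tol=0.3):
    """Union the floor-plane positions (in metres) reported by a main and an
    auxiliary image-capturing unit. Positions closer than `tol` are assumed
    to be the same person, so a user occluded in the lower view is still
    counted through the higher view. `tol` is an assumed threshold."""
    merged = list(main_positions)
    for p in aux_positions:
        if all(abs(p - q) > tol for q in merged):
            merged.append(p)
    return sorted(merged)
```

In the occlusion example above, the main unit reports one position while the auxiliary unit reports two; the merge recovers both users.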
Here, the main image-capturing units 201 and auxiliary image-capturing units 202 may be depth cameras, or devices combining a depth camera with a color camera. For example, the same user can be tracked according to color distribution in the image sequence obtained by the color camera.
The front-end computing unit 130 is coupled to the sensing unit 120 to receive the image sequence. Through the image processing module 133, the front-end computing unit 130 performs image-processing operations on the image sequence, thereby monitoring whether a user is present and what limb movements the user makes, and produces a monitoring result accordingly. The back-end computing unit 140 is coupled to the display unit 110 and the front-end computing unit 130. The back-end computing unit 140 receives the monitoring result of the front-end computing unit 130 and carries out interactive operations according to it through the operation processing module 143.
In this embodiment, the front-end computing unit 130 and the back-end computing unit 140 are two independent host computers. One host serves as the front-end computing unit 130, which analyzes and organizes the data of the image sequences from the main image-capturing units 201 and auxiliary image-capturing units 202. Another host with higher computing power serves as the back-end computing unit 140, which processes the data already organized by the front-end computing unit 130.
For example, the front-end computing unit 130 performs image-processing operations on the image sequence obtained by the sensing unit 120 and sends the resulting data (e.g., the monitoring result produced by monitoring the user's presence and limb movements) to the back-end computing unit 140, which then controls the content shown on the display unit 110 according to the data received from the front-end computing unit 130. In other embodiments, the front-end computing unit 130 and the back-end computing unit 140 may be combined in the same host, or the back-end computing unit 140 may be composed of multiple hosts to increase its computing power; there is no limitation here.
The front-end computing unit 130 comprises a first processing unit 131 and a first storage unit 132. The back-end computing unit 140 comprises a second processing unit 141 and a second storage unit 142. The first processing unit 131 and the second processing unit 141 are, for example, central processing units (CPUs), or other programmable general-purpose or special-purpose microprocessors, digital signal processors (DSPs), programmable controllers, application-specific integrated circuits (ASICs), programmable logic devices (PLDs), other similar devices, or combinations of these devices. The first storage unit 132 and the second storage unit 142 are, for example, fixed or removable random access memory (RAM) of any form, read-only memory (ROM), flash memory, a hard disk, other similar devices, or combinations of these devices.
The image processing module 133 is stored in the first storage unit 132, and the operation processing module 143 is stored in the second storage unit 142. The front-end computing unit 130 executes the image processing module 133 in the first storage unit 132 to perform image-processing operations on the image sequence and obtain a monitoring result. The back-end computing unit 140 executes the operation processing module 143 in the second storage unit 142 to perform corresponding actions according to the monitoring result obtained by the image processing module 133.
The flow of the method of playing multimedia content with a polyhedral object is described below in conjunction with the system 100 described above. Fig. 3 is a flowchart of the method of playing multimedia content with a polyhedral object according to an embodiment of the invention. In step S305, the front-end computing unit 130 analyzes, through the image processing module 133, the image sequence obtained by the sensing unit 120 and obtains a monitoring result, so that the back-end computing unit 140 performs corresponding actions through the operation processing module 143 according to that monitoring result, as in the following steps S310 to S335.
In step S310, the image processing module 133 monitors whether a user is present within the trigger range, thereby determining whether to display the polyhedral object on the display unit 110. The trigger range is a region in front of the display unit 110, for example a region as wide as the display unit 110 and between 0.8 meters and 3 meters away from it.
For example, Fig. 4 is a schematic diagram of the ranges set for monitoring the presence of a user according to an embodiment of the invention. Fig. 4 is a top view. In this embodiment, the region that is as wide as the display unit 110 and lies between distance D1 and distance D3 in front of the display surface 111 of the display unit 110 is set as the trigger range 400 (shown with a dotted box). In addition, an operation range 410 (shown with hatching) may be set within the trigger range 400, namely the region between distance D1 and distance D2 in front of the display surface 111 of the display unit 110. The operation range 410 is described in the subsequent steps.
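The nested ranges of Fig. 4 reduce to simple distance checks. A sketch under the assumption D1 = 0.8 m and D3 = 3 m (the example values given earlier), with D2 = 2 m as an invented middle value the patent does not state:

```python
def classify_position(distance, d1=0.8, d2=2.0, d3=3.0):
    """Classify a user's distance (metres) from the display surface:
    between d1 and d2 is the operation range (inside the trigger range),
    between d2 and d3 is the trigger range only, otherwise outside both."""
    if distance < d1 or distance > d3:
        return "outside"
    if distance <= d2:
        return "operation"
    return "trigger"
```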
Through the image processing module 133, the front-end computing unit 130 performs image-processing operations on the image sequence (for example, comparison based on human-figure features) to monitor whether a user has entered the trigger range 400. If a user is detected within the trigger range 400, step S315 is performed: the back-end computing unit 140 displays the polyhedral object on the display unit 110. For example, when the image processing module 133 detects a user within the trigger range 400, the front-end computing unit 130 sends the relevant monitoring result to the back-end computing unit 140. The back-end computing unit 140 then knows that a user is within the trigger range 400, generates the polyhedral object through the operation processing module 143, and presents it on the display unit 110.
For example, Fig. 5 is a schematic diagram of the polyhedral object according to an embodiment of the invention. The polyhedral object 500 shown in Fig. 5 is a hexahedron comprising six playing faces 501 to 506. As shown in Fig. 5, going clockwise around the sides of the polyhedral object 500 starting from playing face 501 are playing faces 502, 503 and 504, while playing face 505 is on top of the polyhedral object 500 and playing face 506 is on the bottom. The number of playing faces here is only illustrative; in other embodiments the polyhedral object may have three, four, five, or more playing faces.
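A hexahedral playing object can be represented minimally as a mapping from face number to assigned content; this is an illustrative structure, not the patent's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class PolyhedralObject:
    """Six playing faces, numbered 501-506 as in Fig. 5, each holding one
    content id (None until the interactive mode assigns content)."""
    faces: dict = field(
        default_factory=lambda: {n: None for n in range(501, 507)})

    def assign(self, contents):
        """Assign one piece of content per playing face, as in step S325."""
        for face_id, content in zip(sorted(self.faces), contents):
            self.faces[face_id] = content

obj = PolyhedralObject()
obj.assign(["ad-a", "ad-b", "ad-c", "ad-d", "ad-e", "ad-f"])
```

Changing the face count (three, four, five, or more faces) only changes the range used for the `faces` keys.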
On the other hand, if no user is detected within the trigger range 400, step S315 is not performed and step S310 continues. For example, when the user is within distance D1 of the display surface 111, or more than distance D3 away from it, step S315 is not performed.
After the polyhedral object 500 is displayed, in step S320 the image processing module 133 monitors whether the user is within the operation range 410 inside the trigger range 400. For example, the front-end computing unit 130 performs, through the image processing module 133, a comparison based on human-figure features on the image sequence to monitor whether the user has entered the operation range 410. When the image processing module 133 detects that the user is within the operation range 410 inside the trigger range 400, in step S325 the operation processing module 143 starts the interactive mode of the polyhedral object 500 and assigns one piece of multimedia content to each playing face 501 to 506 of the polyhedral object 500. That is, when a user is detected within the trigger range 400, the polyhedral object 500 is displayed; if the user is then also detected within the operation range 410, the operation processing module 143 plays different multimedia content on each playing face 501 to 506; if no user is detected within the operation range 410, step S325 is not performed and step S320 continues.
In addition, when displaying the polyhedral object 500 on the display unit 110, the operation processing module 143 may also play different multimedia content at random on the multiple playing faces 501 to 506 of the polyhedral object 500.
Afterwards, in the interactive mode, the user can manipulate the polyhedral object 500. In step S330, the image processing module 133 monitors the limb movements the user makes within the operation range 410. Then, in step S335, the operation processing module 143 performs corresponding operations on the polyhedral object 500 according to those movements. For example, the user's gestures in the image sequence are monitored, and operations such as rotation (up-down, left-right, or oblique rotation), dragging, or zooming are performed on the whole polyhedral object 500, or a function in the operation interface of one of the playing faces (for example, the face directly facing the user) is executed.
An embodiment of the composition of the image processing module 133 and the operation processing module 143 is further illustrated below. Fig. 6 is a block diagram of the image processing module according to an embodiment of the invention. Fig. 7 is a block diagram of the operation processing module according to an embodiment of the invention. The description follows in conjunction with Fig. 1, Fig. 4 and Fig. 5.
In Fig. 6, the image processing module 133 comprises a person monitoring module 601, a behavior monitoring module 602 and a person analysis module 603. The person monitoring module 601 monitors whether a user is present. The behavior monitoring module 602 monitors the user's limb movements. The person analysis module 603 analyzes characteristic information of at least one of the user's body region, face region, hand region and foot region to obtain an analysis result.
In Fig. 7, the operation processing module 143 comprises an object control module 701, a mode switch module 702, a multimedia assignment module 703, an execution module 704 and a statistics module 705. The object control module 701 generates the polyhedral object 500 and displays it on the display unit 110. The mode switch module 702 starts or closes the interactive mode of the polyhedral object 500. The multimedia assignment module 703 assigns multiple pieces of multimedia content to the multiple playing faces 501 to 506 of the polyhedral object 500. The execution module 704 performs corresponding operations on the polyhedral object 500 according to the limb movements. The statistics module 705 calculates the operating frequency of each piece of multimedia content according to the operations performed on it.
Fig. 8A and Fig. 8B are flowcharts of a method for playing multimedia content according to another embodiment of the invention. The present embodiment is one application of the embodiment shown in Fig. 3. The description below refers to Fig. 1 and Figs. 4–7 together.
In the present embodiment, the trigger range 400 and the operation range 410 are set in advance in the front-end computing unit 130 (the operation range 410 lies within the trigger range 400), and the person monitoring module 601 then monitors whether a user is present in the trigger range 400 or the operation range 410 according to the set data. Accordingly, the back-end computing unit 140 can use whether a user is present in the trigger range 400 to decide whether to carry out the remaining processing sequences.
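The nested trigger/operation ranges can be modeled as two concentric zones in front of the display. The following minimal sketch classifies a user's distance from the screen; the concrete distances are hypothetical examples, not values specified by the patent:

```python
# Minimal sketch of the nested trigger/operation ranges.
# The distances (3.0 m, 1.5 m) are hypothetical examples,
# not values specified by the patent.
TRIGGER_RANGE_M = 3.0    # outer zone: polyhedral object is displayed
OPERATION_RANGE_M = 1.5  # inner zone (inside the trigger range): interaction allowed

def classify_zone(distance_m: float) -> str:
    """Return which zone a user at the given distance occupies."""
    if distance_m <= OPERATION_RANGE_M:
        return "operation"   # user may manipulate the polyhedral object
    if distance_m <= TRIGGER_RANGE_M:
        return "trigger"     # polyhedral object is shown, follow mode active
    return "outside"         # object display is released
```

Because the operation range is a subset of the trigger range, a user classified as "operation" necessarily also satisfies the trigger condition, which matches the flow of Figs. 8A–8B.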
Referring to Fig. 8A, when the person monitoring module 601 detects that a user is located within the trigger range 400, as shown in step S801, the object control module 701 displays the polyhedral object 500 on the display unit 110. Here, the object control module 701 may further display the polyhedral object 500 at a corresponding second position on the display unit 110 according to the first position where the user is currently located (the position in front of the display unit 110).
Then, in step S802, the mode switching module 702 activates the follow mode of the polyhedral object 500. Next, in step S803, in the follow mode, the behavior monitoring module 602 monitors the user's movement information, so that the object control module 701 moves the polyhedral object 500 correspondingly according to the movement information. For example, when the user approaches the display unit 110, then as the distance between the user and the display unit 110 shortens, the object control module 701 relatively enlarges the polyhedral object 500 to present the visual effect of the polyhedral object 500 moving toward the front. Conversely, when the user moves away from the display unit 110, then as the distance lengthens, the object control module 701 relatively shrinks the polyhedral object 500 to present the visual effect of the polyhedral object 500 moving toward the rear.
As a further example, when the user moves toward the right of the display unit 110, the object control module 701 relatively moves the polyhedral object 500 to the right; and when the user moves toward the left of the display unit 110, the object control module 701 relatively moves the polyhedral object 500 to the left.
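The follow-mode behavior of steps S802–S803 — enlarging the object as the user approaches and shifting it left or right as the user moves — can be sketched as a mapping from the user's position to the object's on-screen scale and horizontal offset. The inverse-distance scale law and the reference distance below are illustrative assumptions, not taken from the patent:

```python
# Sketch of the follow-mode mapping (steps S802-S803): the user's distance
# controls the object's scale, and the user's horizontal position controls
# the object's horizontal placement. The inverse-distance scale law and the
# reference distance of 2.0 m are assumptions for illustration.
REFERENCE_DISTANCE_M = 2.0

def follow_transform(distance_m: float, user_x_norm: float) -> tuple:
    """Map user position to (scale, x_offset_norm) for the polyhedral object.

    distance_m:  user's distance from the display.
    user_x_norm: user's horizontal position, 0.0 (left) .. 1.0 (right).
    """
    # Closer user -> larger object (visual effect of moving toward the front).
    scale = REFERENCE_DISTANCE_M / max(distance_m, 0.1)
    # The object follows the user horizontally.
    x_offset_norm = user_x_norm
    return scale, x_offset_norm
```

At the reference distance the scale is 1.0; halving the distance doubles the apparent size, giving the forward/backward visual effect the text describes.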
Then, in step S804, the person monitoring module 601 monitors whether a user is present in the operation range 410. If the user has not yet entered the operation range 410, step S805 is performed to further monitor whether the user has left the trigger range 400.
When the person monitoring module 601 detects that the user has left the trigger range 400, in step S806, the object control module 701 releases the display of the polyhedral object 500; that is, the polyhedral object 500 is no longer shown on the display unit 110.
On the other hand, if the person monitoring module 601 detects that the user has entered the operation range 410, step S810 is performed to start the interactive operation procedure. The interactive operation procedure is described in detail in Fig. 8B.
Referring to Fig. 8B, when the user is located in the operation range 410, the follow mode of the polyhedral object 500 remains activated and is not deactivated. In step S811, the person monitoring module 601 monitors whether the user is facing the display unit 110. For example, the person monitoring module 601 applies a face recognition algorithm to the captured image sequence to check whether a face is present, thereby judging whether the user is facing the display unit 110. In the present embodiment, the subsequent step S812 is not performed unless the user is detected facing the display unit 110.
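Step S811 can be reduced to: run a frontal-face detector on the current frame and treat any detection as "user is facing the display." The sketch below assumes an external detector that returns a list of `(x, y, w, h)` face bounding boxes (for example, an OpenCV Haar cascade); the minimum-size filter is an added assumption to ignore distant passers-by, not something the patent specifies:

```python
# Sketch of the facing check in step S811. `face_boxes` is assumed to come
# from a frontal-face detector (e.g., OpenCV's Haar cascade returns
# (x, y, w, h) boxes); a frontal detector typically fires only when the
# user is roughly facing the camera, so any sufficiently large box counts.
MIN_FACE_SIDE_PX = 40  # hypothetical threshold to ignore tiny/distant faces

def is_facing_display(face_boxes) -> bool:
    """Return True if any detected frontal face is large enough to count."""
    return any(w >= MIN_FACE_SIDE_PX and h >= MIN_FACE_SIDE_PX
               for (_x, _y, w, h) in face_boxes)
```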
When it is detected that the user is facing the display unit 110, in step S812, the mode switching module 702 activates the interactive mode of the polyhedral object 500. In the interactive mode, the polyhedral object 500 is in an operable state. Then, in step S813, the person analysis module 603 analyzes characteristic information of at least one of the user's body region, face region, hand region, and foot region, thereby obtaining an analysis result. For example, the analysis result includes the user's height, gender, age, and so on.
In step S814, the multimedia assignment module 703 determines the multimedia content assigned to the playing faces 501–506 according to the analysis result. For example, if the user is an adult male, automobile advertisements, razor advertisements, and the like are assigned to the playing faces 501–506. If the user is a working woman, cosmetics or skin-care advertisements are assigned to the playing faces 501–506. If the user is an elderly person, health food advertisements are assigned to the playing faces 501–506. Here, the multimedia content assigned to each of the playing faces 501–506 is not identical.
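Step S814 amounts to a mapping from the analysis result to advertisement categories, followed by filling the six playing faces with distinct items. The category names below follow the examples in the text, while the age boundaries, the fallback category, and the catalog structure are assumptions for illustration:

```python
# Sketch of step S814: pick advertisement categories from the analysis
# result (gender/age), then fill the playing faces with distinct items.
# Age boundaries (18, 65) and the "general" fallback are assumed.
def pick_ad_categories(gender: str, age: int) -> list:
    """Return candidate ad categories for the playing faces."""
    if age >= 65:
        return ["health food"]
    if age >= 18 and gender == "male":
        return ["automotive", "razor"]
    if age >= 18 and gender == "female":
        return ["cosmetics", "skin care"]
    return ["general"]  # assumed fallback for users outside the examples

def assign_faces(categories: list, catalog: dict, num_faces: int = 6) -> list:
    """Fill the playing faces with distinct items drawn from the categories."""
    pool = [item for cat in categories for item in catalog.get(cat, [])]
    return pool[:num_faces]
```

A usage example: `assign_faces(pick_ad_categories("male", 30), catalog)` yields up to six distinct automotive and razor advertisements for an adult male.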
In step S815, in the interactive mode, the behavior monitoring module 602 monitors the body movements that the user performs within the operation range 410. Then, in step S816, the execution module 704 performs a corresponding operation on the polyhedral object 500 according to the body movement. For example, after the behavior monitoring module 602 detects the user's body movement (such as a gesture), it sends the relevant monitoring result (e.g., a gesture analysis result) to the back-end computing unit 140. The back-end computing unit 140 then uses the execution module 704 to judge whether the detected gesture matches a preset gesture operation. If it does, the execution module 704 performs the corresponding operation on the polyhedral object 500.
In the present embodiment, the object control module 701 may display an operation interface on each of the playing faces 501–506 according to the multimedia content assigned to it. For example, if the multimedia content assigned to the playing face 502 is a video file, the operation interface of a video player program may additionally be shown on the playing face 502. As a further example, if the multimedia content assigned to the playing face 501 is a picture file, the operation interface of a picture viewer may additionally be shown on the playing face 501. It should be understood that the above operation interfaces are merely illustrative and not limiting.
In addition, the behavior monitoring module 602 also judges whether the body movement belongs to a first-class action or a second-class action. The first-class action is, for example, a rotate gesture, a drag gesture, or a zoom gesture, while the second-class action is a click gesture performed on an operation interface. When the body movement belongs to the first-class action, the execution module 704 performs the corresponding operation on the whole polyhedral object 500. For example, when the behavior monitoring module 602 detects a rightward rotate gesture, the execution module 704 rotates the whole polyhedral object 500 to the right. When the body movement belongs to the second-class action, the execution module 704 performs the corresponding operation only on the operation interface of one of the playing faces. For example, when the behavior monitoring module 602 detects a click gesture within the area of an operation interface, the function corresponding to the clicked position in the operation interface, such as play, pause, or replay, is executed.
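The two-class dispatch above — whole-object gestures versus clicks on a single face's interface — can be sketched as a small dispatcher. The gesture names and the class membership follow the text; the handler structure and return strings are illustrative assumptions:

```python
# Sketch of the first-class / second-class dispatch: rotate, drag, and zoom
# gestures act on the whole polyhedral object, while a click acts only on
# the operation interface of one playing face. Handler structure is assumed.
FIRST_CLASS = {"rotate", "drag", "zoom"}   # whole-object gestures
SECOND_CLASS = {"click"}                   # per-face interface gestures

def dispatch_gesture(gesture: str, target_face: int = None) -> str:
    """Return a description of the operation the execution module performs."""
    if gesture in FIRST_CLASS:
        return f"apply '{gesture}' to whole polyhedral object"
    if gesture in SECOND_CLASS:
        return f"run interface function on face {target_face}"
    return "ignore"  # gesture does not match any preset gesture operation
```

The final `"ignore"` branch reflects step S816: a gesture that matches no preset gesture operation triggers no operation on the object.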
Furthermore, the operating frequency of each item of multimedia content can be calculated according to the respective operations performed on the operation interfaces. Here, the statistics module 705 calculates the operating frequency of each item of multimedia content according to the respective operations performed on the operation interfaces. The multimedia assignment module 703 then takes the category of the multimedia content with the highest operating frequency as a basis, selects multiple other items of multimedia content from that category, and thereby changes the multimedia content on the playing faces 501–506. For example, suppose the multimedia contents originally shuffled onto the playing faces 501–506 belong to six different categories (categories A1–A6). After the statistics module 705 has tallied the operating frequency of each playing face over a preset time, if the operation interface shown on the playing face 502 has the highest operating frequency, then taking category A2 of the multimedia content on the playing face 502 as a basis, the playing faces 501 and 503–506 are changed to other multimedia contents in category A2.
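This statistics-driven reassignment can be sketched as: count interface operations per face over the preset window, find the face with the highest count, and refill the other faces from that face's category. The data shapes (operation log, category maps) are illustrative assumptions:

```python
from collections import Counter

# Sketch of the statistics module plus reassignment: tally operations per
# playing face, find the most-operated face, and refill the remaining faces
# from that face's content category (category A2 in the text's example).
def reassign_by_frequency(op_log, face_category, category_pool):
    """op_log: list of face ids, one entry per interface operation observed.
    face_category: dict mapping face id -> category of its current content.
    category_pool: dict mapping category -> other contents in that category.
    Returns (top_face, new_contents) for refilling the remaining faces.
    """
    counts = Counter(op_log)                    # operating frequency per face
    top_face, _ = counts.most_common(1)[0]      # face with the highest frequency
    top_category = face_category[top_face]      # its content's category
    return top_face, category_pool[top_category]
```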
Then, in step S817, the person monitoring module 601 monitors whether the user is still located in the operation range 410. If the user is still located in the operation range 410, step S815 is performed. When the person monitoring module 601 detects that the user has left the operation range 410, step S818 is performed, in which the mode switching module 702 deactivates the interactive mode of the polyhedral object 500.
After step S818 ends, step S805 of Fig. 8A is performed, using the person monitoring module 601 to monitor at any time whether the user has left the trigger range 400. In addition, while the user is located in the operation range, the person monitoring module 601 may also monitor at any time whether the user has left the trigger range 400. When it is detected that the user has left the trigger range 400, the object control module 701 releases the display of the polyhedral object 500. For example, the display of the polyhedral object 500 is released in the following situations: the user leaves the trigger range 400 without ever having faced the display unit 110; or the interactive mode has been activated but no body movement of the user has been detected, and the user then leaves the trigger range 400.
It is worth mentioning that, according to the above method, if two or more users are detected within the operation range 410, two (or more) polyhedral objects can be displayed at corresponding positions on the display unit 110 according to the positions of those users. For example, Fig. 9 is a schematic diagram of multi-user operation according to an embodiment of the invention. Referring to Fig. 4 and Fig. 9, users U1, U2, and U3 are in front of the display unit 110, and suppose all of them are located within the trigger range 400. Therefore, polyhedral objects O1, O2, and O3 are displayed on the display unit 110 at positions corresponding to users U1, U2, and U3, respectively. When users U1, U2, and U3 enter the operation range 410, they can operate the polyhedral objects O1, O2, and O3, respectively. For the operation of the polyhedral objects O1, O2, and O3, refer to the steps of Fig. 3 or Figs. 8A–8B; the related description is omitted here.
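Multi-user operation as in Fig. 9 amounts to maintaining one polyhedral object per tracked user, placed according to each user's position. A minimal sketch follows; the user identifiers and the normalized-coordinate convention are assumptions, not part of the patent:

```python
# Sketch of multi-user handling (Fig. 9): each tracked user in the trigger
# range gets their own polyhedral object, placed at the corresponding
# horizontal position on the display. Positions are normalized to 0..1.
def place_objects(user_positions: dict) -> dict:
    """Map each user id to a polyhedral object at that user's x position."""
    return {user_id: {"object": f"polyhedron-{user_id}", "x": x}
            for user_id, x in user_positions.items()}
```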
In summary, in the above embodiments, when the user approaches the display unit (entering the trigger range), a polyhedral object is shown on the display unit, and when the user moves even closer (entering the operation range), the user is allowed to operate the polyhedral object. In this way, the enjoyment of use can be improved, which in turn increases the user's willingness to stay in front of the display unit 110. In addition, by using a polyhedral object to play different multimedia contents on its multiple playing faces, the exposure probability of the different multimedia contents can be increased, and the user is given multiple browsing choices. Furthermore, the multimedia information played on the playing faces can also be changed actively, specifically, and intelligently according to different character features.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (20)

1. A method for playing multimedia content, characterized by comprising:
analyzing an image sequence obtained by a sensing unit and performing corresponding actions, comprising:
when a user is detected within a trigger range, displaying a polyhedral object on a display unit;
when the user is detected within an operation range inside the trigger range, activating an interactive mode of the polyhedral object, and assigning multiple items of multimedia content to multiple playing faces of the polyhedral object, respectively;
in the interactive mode, monitoring a body movement that the user performs within the operation range; and
performing a corresponding operation on the polyhedral object according to the body movement.
2. The method according to claim 1, characterized in that, after the step of displaying the polyhedral object on the display unit when the user is detected within the trigger range, the method further comprises:
activating a follow mode of the polyhedral object; and
in the follow mode, monitoring movement information of the user, so that the polyhedral object performs a corresponding movement according to the movement information.
3. The method according to claim 1, characterized in that, after the step of activating the interactive mode of the polyhedral object, the method further comprises:
when the user is detected leaving the operation range, deactivating the interactive mode of the polyhedral object.
4. The method according to claim 1, characterized in that, when the user is detected within the operation range inside the trigger range, the method further comprises:
monitoring whether the user is facing the display unit; and
when the user is detected facing the display unit, activating the interactive mode of the polyhedral object.
5. The method according to claim 4, characterized in that, when the user is detected facing the display unit, the method further comprises:
analyzing characteristic information of at least one of a body region, a face region, a hand region, and a foot region of the user, thereby obtaining an analysis result; and
determining, according to the analysis result, the multimedia content assigned to the playing faces.
6. The method according to claim 1, characterized in that each of the playing faces can show a corresponding operation interface according to the multimedia content assigned to it;
wherein the step of performing the corresponding operation on the polyhedral object according to the body movement comprises:
judging whether the body movement belongs to a first-class action or a second-class action;
when the body movement belongs to the first-class action, performing the corresponding operation on the whole polyhedral object; and
when the body movement belongs to the second-class action, performing the corresponding operation on the operation interface of one of the playing faces.
7. The method according to claim 6, further comprising:
calculating an operating frequency of each of the multimedia contents according to the respective operations performed on the operation interfaces; and
taking the category of the multimedia content with the highest operating frequency as a basis, selecting multiple other multimedia contents from that category, and thereby changing the playing faces to those other multimedia contents.
8. The method according to claim 6, characterized in that the first-class action is a rotate gesture, a drag gesture, or a zoom gesture, and the second-class action is a click gesture performed on the operation interface.
9. The method according to claim 1, characterized in that, after the step of displaying the polyhedral object on the display unit, the method further comprises:
when the user is detected leaving the trigger range, releasing the display of the polyhedral object.
10. The method according to claim 1, characterized in that the step of displaying the polyhedral object on the display unit when the user is detected within the trigger range comprises:
displaying the polyhedral object at a corresponding second position on the display unit according to a first position where the user is currently located.
11. A system for playing multimedia content, characterized by comprising:
a display unit;
a sensing unit, which obtains an image sequence;
a front-end computing unit, coupled to the sensing unit to receive the image sequence, wherein the front-end computing unit performs image-processing operations on the image sequence through an image processing module, thereby monitoring whether a user is present and the user's body movement, and produces a monitoring result accordingly; and
a back-end computing unit, coupled to the display unit and the front-end computing unit to receive the monitoring result, wherein the back-end computing unit performs corresponding actions through an operation processing module according to the monitoring result;
wherein, when the image processing module detects a user within a trigger range, the operation processing module displays a polyhedral object on the display unit; when the image processing module detects the user within an operation range inside the trigger range, the operation processing module activates an interactive mode of the polyhedral object and assigns multiple items of multimedia content to multiple playing faces of the polyhedral object, respectively; and in the interactive mode, the image processing module monitors the body movement that the user performs within the operation range, so that the operation processing module performs a corresponding operation on the polyhedral object according to the body movement.
12. The system according to claim 11, characterized in that
the image processing module comprises:
a person monitoring module, which monitors whether the user is present in the trigger range or the operation range; and
a behavior monitoring module, which monitors the body movement of the user;
the operation processing module comprises:
an object control module, which generates the polyhedral object and displays it on the display unit;
a mode switching module, which activates or deactivates the interactive mode of the polyhedral object;
a multimedia assignment module, which assigns the different multimedia contents to the playing faces of the polyhedral object, respectively; and
an execution module, which performs the corresponding operation on the polyhedral object according to the body movement;
wherein, when the person monitoring module detects the user within the trigger range, the mode switching module activates a follow mode of the polyhedral object, and in the follow mode the behavior monitoring module monitors movement information of the user, so that the execution module moves the polyhedral object correspondingly according to the movement information.
13. The system according to claim 12, characterized in that, when the person monitoring module detects that the user has left the operation range, the mode switching module deactivates the interactive mode of the polyhedral object.
14. The system according to claim 12, characterized in that the person monitoring module further monitors whether the user is facing the display unit, and when the user is detected facing the display unit, the mode switching module activates the interactive mode of the polyhedral object.
15. The system according to claim 12, characterized in that the image processing module further comprises: a person analysis module, which analyzes characteristic information of at least one of a body region, a face region, a hand region, and a foot region of the user, thereby obtaining an analysis result;
and the multimedia assignment module determines, according to the analysis result, the multimedia content assigned to the playing faces.
16. The system according to claim 12, characterized in that the object control module can display an operation interface on each of the playing faces according to the multimedia content assigned to it; and the behavior monitoring module judges whether the body movement belongs to a first-class action or a second-class action: when the body movement belongs to the first-class action, the execution module performs the corresponding operation on the whole polyhedral object, and when the body movement belongs to the second-class action, the execution module performs the corresponding operation on the operation interface of one of the playing faces;
wherein the first-class action is a rotate gesture, a drag gesture, or a zoom gesture, and the second-class action is a click gesture performed on the operation interface.
17. The system according to claim 16, characterized in that the operation processing module further comprises: a statistics module, which calculates an operating frequency of each of the multimedia contents according to the respective operations performed on the operation interfaces;
and the multimedia assignment module takes the category of the multimedia content with the highest operating frequency as a basis, selects multiple other multimedia contents from that category, and thereby changes the playing faces to those other multimedia contents.
18. The system according to claim 12, characterized in that, when the person monitoring module detects that the user has left the trigger range, the object control module releases the display of the polyhedral object.
19. The system according to claim 11, characterized in that the object control module displays the polyhedral object at a corresponding second position on the display unit according to a first position where the user is currently located.
20. The system according to claim 11, characterized in that the sensing unit comprises:
multiple main image-capturing units, respectively disposed below the display unit; and
multiple auxiliary image-capturing units, respectively disposed above the display unit corresponding to the main image-capturing units.
CN201310571807.6A 2013-09-24 2013-11-13 Method and system for playing multimedia content by polyhedral object Pending CN104461317A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW102134315A TW201512899A (en) 2013-09-24 2013-09-24 Method and system for playing multimedia contents with polyhedral object
TW102134315 2013-09-24

Publications (1)

Publication Number Publication Date
CN104461317A true CN104461317A (en) 2015-03-25

Family

ID=52907460

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310571807.6A Pending CN104461317A (en) 2013-09-24 2013-11-13 Method and system for playing multimedia content by polyhedral object

Country Status (2)

Country Link
CN (1) CN104461317A (en)
TW (1) TW201512899A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111309232A (en) * 2020-02-24 2020-06-19 北京明略软件系统有限公司 Display area adjusting method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101662537A (en) * 2009-08-18 2010-03-03 深圳市融创天下科技发展有限公司 Switching method of multi-picture video
CN101894511A (en) * 2010-07-15 2010-11-24 鸿富锦精密工业(深圳)有限公司 Electronic looking board
TW201303712A (en) * 2011-07-05 2013-01-16 Top Victory Invest Ltd Operating method for mirror display which can function as a mirror and also as an advertising display
CN203137756U (en) * 2012-12-28 2013-08-21 广东泰特科技有限公司 Intellisense multimedia interaction shopwindow
CN103268561A (en) * 2013-04-24 2013-08-28 北京捷讯华泰科技有限公司 Keyword voice advertisement playing method based on intelligent information terminal



Also Published As

Publication number Publication date
TW201512899A (en) 2015-04-01

Similar Documents

Publication Publication Date Title
EP3079042B1 (en) Device and method for displaying screen based on event
TWI779343B (en) Method of a state recognition, apparatus thereof, electronic device and computer readable storage medium
JP6267861B2 (en) Usage measurement techniques and systems for interactive advertising
CN201383313Y (en) Interactive billboard and network type interactive advertising system
CN106730815B (en) Somatosensory interaction method and system easy to realize
US10354131B2 (en) Product information outputting method, control device, and computer-readable recording medium
CN112105983B (en) Enhanced visual ability
US20110128283A1 (en) File selection system and method
CN110716641B (en) Interaction method, device, equipment and storage medium
CN114257875B (en) Data transmission method, device, electronic equipment and storage medium
CN103336578A (en) Novel motion induction interactive advertising device
CN111861657A (en) Display screen, image display method thereof and computer-readable storage medium
CN104461317A (en) Method and system for playing multimedia content by polyhedral object
TW201514887A (en) Playing system and method of image information
CN108958690B (en) Multi-screen interaction method and device, terminal equipment, server and storage medium
US11205405B2 (en) Content arrangements on mirrored displays
CN114012746B (en) Robot, information playing method, control device and medium
KR20180077881A (en) System for intelligent exhibition based on transparent display and method thereof
US20170061491A1 (en) Product information display system, control device, control method, and computer-readable recording medium
EP3473015A1 (en) A method and system for delivering an interactive video
CN112764527A (en) Product introduction projection interaction method, terminal and system based on somatosensory interaction equipment
CN108076391A (en) For the image processing method, device and electronic equipment of live scene
US20190287285A1 (en) Information processing device, information processing method, and program
CN105183141A (en) Information interaction visible mirror
US20170061475A1 (en) Product information outputting method, control device, and computer-readable recording medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150325