NL2004709C2 - System and method for communicating information to a public.
- Publication number
- NL2004709C2
- Authority
- NL
- Netherlands
- Prior art keywords
- visual
- processor
- audio
- messages
- predetermined
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q30/00—Commerce
- G06Q30/02—Marketing; Price estimation or determination; Fundraising
Landscapes
- Business, Economics & Management (AREA)
- Engineering & Computer Science (AREA)
- Accounting & Taxation (AREA)
- Development Economics (AREA)
- Strategic Management (AREA)
- Finance (AREA)
- Game Theory and Decision Science (AREA)
- Entrepreneurship & Innovation (AREA)
- Economics (AREA)
- Marketing (AREA)
- Physics & Mathematics (AREA)
- General Business, Economics & Management (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- User Interface Of Digital Computer (AREA)
Description
SYSTEM AND METHOD FOR COMMUNICATING INFORMATION TO A PUBLIC
The invention relates to a system for interactively communicating information to a public.
Traditionally, information for a public, for instance for marketing purposes, is communicated by displaying video messages on a display and/or by playing sound messages through speakers. The communication can be made interactive by using input means where members of the public can provide information about themselves or their preferences, and by choosing the messages to be communicated based on said input.
One aim of the invention is to provide a system which is able to communicate information in a more effective and/or attractive manner.
According to the invention the system comprises: a processor; a video camera connected to said processor for capturing photo or video data; output means connected to said processor for transmitting audio, visual and/or digital messages; a detector connected to said processor for capturing reaction data from one or more persons in said public, such as a button, a touch screen, a proximity sensor, a depth sensor, an RFID reader, a Bluetooth receiver, a microphone, or said video camera; electronic memory connected to said processor comprising data representing a set of predetermined visual cues; electronic memory connected to said processor comprising data representing a first set of audio and/or visual messages; electronic memory connected to said processor comprising data representing a second set of one or more audio, visual and/or digital messages; electronic memory connected to said processor comprising a database wherein representations of each one of said visual cues are linked to one or more of said first set of audio and/or visual messages; electronic memory connected to said processor comprising data representing at least one predetermined reaction from one or more persons in said public, such as touching said button, touching said screen, making a predetermined move, or making a predetermined sound.
The invention enables interaction between the display surface and the passers-by. The system addresses persons based on detected visual cues and detects if the person reacts to this manner of addressing, leading to a more natural way of drawing human attention to the display surface.
For detecting cues in the video capture data said processor is arranged to compare in real time said data received from said video camera with said data representing said visual cues; and said processor is arranged to determine in real time from said comparison if and which visual cue or cues from said set of predetermined visual cues are present in said data received from said video camera.
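The comparison step above can be sketched in Python. This is an illustrative sketch only, not the patented implementation; the cue labels and the `detect_attributes` stub stand in for an actual computer vision pipeline.

```python
# Illustrative sketch of matching camera data against a set of
# predetermined visual cues; cue names and detect_attributes() are
# hypothetical stand-ins for a real computer vision pipeline.

PREDETERMINED_VISUAL_CUES = {"person_present", "red_coat", "child_present"}

def detect_attributes(frame):
    """Stand-in for a vision pipeline: here a 'frame' is assumed to be
    already reduced to a collection of attribute labels."""
    return set(frame)

def cues_in_frame(frame):
    """Determine if and which predetermined visual cues are present."""
    return PREDETERMINED_VISUAL_CUES & detect_attributes(frame)
```

In a real system `detect_attributes` would run detectors such as those cited later in this description on each camera frame; the set intersection then yields exactly the "if and which" determination the claim describes.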
In order to attract the attention of the public said processor is arranged to retrieve in real time from said database which audio and/or visual message or messages from said first set are linked to said visual cue or cues; and said processor is arranged to transmit in real time said retrieved audio and/or visual message or messages from said first set through said output means.
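The database retrieval described above amounts to a mapping from each cue to its linked first-set messages; a minimal sketch, with all file names and cue labels hypothetical:

```python
# Hypothetical cue-to-message database: each predetermined visual cue
# is linked to one or more messages from the first set.
CUE_DATABASE = {
    "red_coat": ["accessories_promo.mp4"],
    "child_present": ["toy_jingle.wav", "toy_promo.mp4"],
}

def retrieve_messages(cues):
    """Retrieve the first-set messages linked to the detected cues."""
    messages = []
    for cue in sorted(cues):  # sorted for deterministic playback order
        messages.extend(CUE_DATABASE.get(cue, []))
    return messages
```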
In order to detect attention of the public said processor is arranged to compare in real time said data received from said detector with said data representing at least one predetermined reaction; and said processor is arranged to determine in real time from said comparison if a reaction from said set of predetermined reactions is present in said data received from said detector.
In order to communicate a message if the public's attention has been detected said processor is arranged to transmit in real time a further audio, visual and/or digital message or messages from said second set through said output means if it is determined that a reaction from said set of predetermined reactions is present in said data received from said detector.
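The two reaction steps reduce to a membership test followed by a conditional transmit; sketched below with hypothetical reaction labels and message names:

```python
# Sketch of reaction detection and the follow-up transmission from the
# second set; the reaction labels and message name are hypothetical.
PREDETERMINED_REACTIONS = {"button_press", "screen_touch", "move_towards", "sound"}

def react(detector_signal, second_set_message="further_commercial.mp4"):
    """Return the further message to transmit if the detector data
    contains a predetermined reaction, else None."""
    if detector_signal in PREDETERMINED_REACTIONS:
        return second_set_message
    return None
```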
For communicating messages in dependency of the type of reaction the system further preferably comprises electronic memory connected to said processor comprising a second database wherein representations of each one of said reactions are linked to one or more of said second set of audio, visual and/or digital messages; wherein said processor is arranged to determine in real time from said comparison which reaction from said set of predetermined reactions is present in said data received from said detector; wherein said processor is arranged to retrieve in real time from said second database which audio, visual and/or digital message or messages from said second set are linked to said reaction; and wherein said processor is arranged to transmit in real time said retrieved audio, visual and/or digital message or messages from said second set through said output means.
For communicating further messages in dependency of further visual cues the system further preferably comprises electronic memory connected to said processor comprising a second or third database wherein representations of each one of said visual cues are linked to one or more of said second set of audio, visual and/or digital messages; wherein said processor is arranged to compare in real time said data received from said video camera with said data representing said visual cues after it is determined that a reaction from said set of predetermined reactions is present in said data received from said detector; wherein said processor is arranged to determine in real time from said comparison if and which visual cue or cues from said set of predetermined visual cues are present in said data received from said video camera; wherein said processor is arranged to retrieve in real time from said second or third database which audio, visual and/or digital message or messages from said second set are linked to said visual cue or cues; and wherein said processor is arranged to transmit in real time said retrieved audio, visual and/or digital message or messages from said second set through said output means.
Said set of predetermined visual cues preferably comprises one or more from the group consisting of: the presence of people, the location of a person, the gender of a person, the approximate age of a person, motion of people, the number of people, the presence of a baby carriage, the presence of a child, the presence of a pet, the direction of movement of a person, a person following a predetermined trajectory, the speed of a person, the pose of a person, the colour of a person's garment, the type of a person's garment (casual/formal/elegant), the presence of a cap on the head of a person, the presence of glasses on a person, the skin colour of a person, the hair colour of a person, the presence of curls in a person's hair, the direction of gaze of a person, the duration of directed gaze of a person, the presence of predetermined emotions in a person's face, the presence of predetermined gestures, the presence of electronic devices such as telephones or music players, the presence of a car or motorbike, the speed of a car or motorbike, a car or motorbike following a predetermined trajectory, the car or motorbike type, the car or motorbike brand, the colour of a car or motorbike, and the license plate number.
For detecting audio cues the system further preferably comprises a microphone connected to said processor for capturing audio data; electronic memory connected to said processor comprising data representing a set of predetermined audio cues; electronic memory connected to said processor comprising an audio cue database wherein representations of each one of said audio cues are linked to one or more of said first and/or second set of audio, visual and/or digital messages; wherein said processor is arranged to compare in real time said data received from said microphone with said data representing said audio cues; wherein said processor is arranged to determine in real time from said comparison if and which audio cue or cues from said set of predetermined audio cues are present in said data received from said microphone; wherein said processor is arranged to retrieve in real time from said audio cue database which audio, visual and/or digital message or messages from said first set are linked to said audio cue or cues; and wherein said processor is arranged to transmit in real time said retrieved audio, visual and/or digital message or messages from said first or second set through said output means.
Said set of predetermined audio cues preferably comprises one or more from the group consisting of: the presence of people, the presence of laughing people, the presence of talking people, the presence of people talking on a telephone, the presence of people shouting, the presence of people talking to a baby or a child, the presence of people singing, and the presence of music from small ear speakers. If more than one microphone is used, a person's location and movement can also be detected and used as an audio cue.
Said output means preferably comprises one or more from the group consisting of: a video beamer, a laser lighting display, a CRT display, a flat panel display, a speaker, and a wireless data sender.
Said further audio, video and/or digital messages preferably comprise one or more from the group consisting of: commercials, news, sports, product information, object information, site information, library information, collection information, and digital archive information.
Said data received from said detector is preferably retrieved from one or more from the group consisting of: the received signal of a button, the received signal of a touch screen, the received signal of a proximity sensor, the received signal of a depth sensor, the received signal of an RFID reader, the received signal of a Bluetooth receiver, the received signal of a microphone, or the received signal of said video camera. Said RFID reader or said Bluetooth receiver may receive identification information about the device and/or its user.
Said predetermined move reaction preferably comprises said one or more persons in said public moving towards the display surface on which said visual message or messages are displayed, or said one or more persons in said public moving towards an object to which the further audio and/or visual message or messages relate.
The invention further relates to a method for interactively communicating information to a public, wherein a processor performs the steps of: comparing in real time data received from a video camera with data representing visual cues; determining in real time from said comparison if and which visual cue or cues from a set of predetermined visual cues are present in said data received from said video camera; retrieving in real time from a database which audio and/or visual message or messages are linked to said visual cue or cues; transmitting in real time said retrieved audio and/or visual message or messages through said speaker and/or said display means; comparing in real time data received from a detector with data representing at least one predetermined reaction; determining in real time from said comparison if a reaction from a set of predetermined reactions is present in said data received from said detector; and transmitting in real time a further audio, visual and/or digital message or messages through said output means if it is determined that a reaction from said set of predetermined reactions is present in said data received from said detector.
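The full sequence of method steps can be sketched as one pass over a camera frame and a detector signal. Everything below is an illustrative stand-in under the same simplifying assumptions as before (frames pre-reduced to attribute labels, messages as file names), not the claimed implementation.

```python
def run_method(frame_attrs, detector_signal, cue_set, cue_db,
               reactions, further_message):
    """One pass of the method: detect visual cues, transmit the linked
    first-set messages, then transmit a further second-set message if a
    predetermined reaction is detected. All inputs are hypothetical."""
    transmitted = []
    # Compare camera data with the predetermined visual cues and
    # determine which are present.
    cues = cue_set & set(frame_attrs)
    # Retrieve and "transmit" the linked first-set messages.
    for cue in sorted(cues):
        transmitted.extend(cue_db.get(cue, []))
    # Compare detector data with the predetermined reactions; on a
    # match, transmit the further message from the second set.
    if detector_signal in reactions:
        transmitted.append(further_message)
    return transmitted
```

For example, a detected red coat plus a screen touch would yield the linked promotion followed by the further commercial, mirroring the attract-then-interact flow described below.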
Said camera may for instance be aimed at one or more from the group consisting of: a shop window, a storefront, a living room, a children's corner, an exhibition room, the interior of a public transportation vehicle, an information screen, a billboard, a store shelf, a book shelf.
The invention also relates to a computer software program arranged to run on a processor to perform the steps of the method for interactively communicating information to a public. The invention furthermore relates to a computer-readable data carrier comprising a computer software program arranged to run on a processor to perform the steps of the method for interactively communicating information to a public. Said data carrier may for instance be one from the group consisting of: a CD-ROM, a floppy disk, a tape, flash memory, system memory, a hard drive. Furthermore the invention relates to a computer comprising a processor and electronic memory connected thereto loaded with a computer software program arranged to perform the steps of the method for interactively communicating information to a public.
The invention is described in more detail below with reference to the drawing, in which: figure 1 is a perspective view of a system in accordance with the invention.
According to figure 1 a system for interactively communicating information to a public comprises a computer 1 with amongst others a processor unit, system memory and a hard drive, a video camera 2 with a microphone connected to for instance a USB port of the computer 1, a video beamer 3 connected to a video out port of the computer 1, a projection surface 4, and a speaker 5 connected to an audio out port of the computer 1. A software program is loaded from the hard drive into the system memory of the computer 1 in order to perform the steps of the communication method.
The system enables the surface 4 to actively interact with humans 6 passing by. The system's software comprises an attract component and an interact component.
The function of the attract component is to draw the attention of passers-by 6 by addressing them directly. The input from the camera 2 is interpreted by computer vision algorithms loaded on the computer 1, which analyze the captured environment for visual cues. This technology enables computers "to see", i.e. analyze and interpret visual input from the camera 2. If the camera 2 is recording the environment in front of the surface 4, then the visual input can be analyzed by the computer 1. Consequently, a decision can be made based on the input about what type of content to play. Based on the detected visual cues, a response can be realized via voice, visual and/or audio content, and may consist of a selection from a database of pre-recorded content, and may also be (speech) synthesized in dependence on the detected cues. For instance in a clothing accessories department, if the detected visual cues comprise the colour of a person's coat, the personalized message may be: "Hey you in the <insert colour of coat> coat, we have some nice accessories for you!", and at the same time an image of an umbrella and bag, which is selected from a database based on said detected coat colour, is displayed on the surface 4. Other examples include: a character in a video on the surface 4 smiling or making gestures to a passer-by if said passer-by looks at the character; a video stream on the surface 4 reacting (i.e. stopping, starting, playing in slow motion) to the motion of a passer-by.
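The coat-colour greeting in the example above is essentially template filling once the colour has been detected; a minimal sketch, with the template text taken from the example and the colour detection itself out of scope:

```python
def attract_message(coat_colour):
    """Fill the personalized greeting template with a detected colour."""
    return (f"Hey you in the {coat_colour} coat, "
            "we have some nice accessories for you!")
```

The same colour value could simultaneously key the lookup of the accessory image to display on the surface 4.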
The interact component of the service can become active after the attention of the person 6 is drawn and he or she becomes engaged with the display surface 4, for example when it is detected from the signal from the video camera 2 that the person 6 approaches the surface 4, or that the person is looking, pointing or waving at the surface 4 or at a certain object on the surface 4. When a reaction to the response is detected, customized content of an advertisement or commercial is displayed on the surface 4 and played through the speaker 5. In this phase there are many possibilities, for example: letting the person 6 play with options on the display surface 4 (in a touch-screen fashion); displaying the person 6 inside the commercial using the advertised product, for instance in the advertised car; showing the person 6 with advertised clothes; letting the person 6 create a customized product, for instance a phone, by drawing loose parts together; letting the person browse an archive/collection; letting the person control the streaming of a video by their body movement. Both the attract and interact components use computer vision technology which interprets input from the camera 2 and/or microphone and determines the customized content.
Some examples of visual cues which can be detected by the system with known prior art technology are: people's motion, number of people in the group, accompanying elements (baby/baby-carriage/child), direction of movement, motion trajectory, speed of movement (detecting if a person is in a hurry), type of motion (walking/running/bending over/falling), gait type and characteristics, appearance (color of clothes, caps/hoods, type of clothes: coat/short-sleeves/suit, style: casual/formal/elegant, sport team jerseys, eyeglasses, skin color, hair color, hair type: curly/long/short), gender, people's approximate age, direction of gaze, duration of directed gaze, face analysis (smiling/laughing/crying) and emotion (joy/fear/disgust/irritation), gestures (waving/pointing/extended arms), using portable devices (talking on the phone/listening to music), car motion/appearance (speed, trajectory, car type and brand, color, license plate details) and motorbike motion/appearance (speed, trajectory, type, color), and bicycle type.
Detection of people in a video signal is described in:
- N. Dalal and B. Triggs, "Histograms of Oriented Gradients for Human Detection", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2005), 2005.
- M. Enzweiler and D. M. Gavrila, "Monocular Pedestrian Detection: Survey and Experiments", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 12, pp. 2179-2195, 2009.
Human motion analysis is described in:
- D. M. Gavrila, "The Visual Analysis of Human Movement: A Survey", Computer Vision and Image Understanding, vol. 73, no. 1, pp. 82-98, 1999.
Gesture analysis and recognition are described in:
- W. Hu, T. Tan, L. Wang and S. Maybank, "A Survey on Visual Surveillance of Object Motion and Behaviors", IEEE Transactions on Systems, Man and Cybernetics, vol. 34, no. 3, 2004.
- S. Mitra and T. Acharya, "Gesture Recognition: A Survey", IEEE Transactions on Systems, Man and Cybernetics, vol. 37, no. 3, 2007.
Detection of types of motion is described in:
- E. Pogalin, A.W.M. Smeulders and A.H.C. Thean, "Visual quasi-periodicity", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), 2008.
- S. Ali and M. Shah, "Human Action Recognition in Videos Using Kinematic Features and Multiple Instance Learning", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 2, pp. 288-303, 2010.
Detection of appearance cues such as hair or clothes color is described in:
- D. Comaniciu and P. Meer, "Robust analysis of feature spaces: Color image segmentation", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 1997), 1997.
- Y. Deng and B.S. Manjunath, "Unsupervised Segmentation of Color-Texture Regions in Images and Video", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 8, pp. 800-810, 2001.
- R. Valenti, N. Sebe and T. Gevers, "Image Saliency by Isocentric Curvedness and Color", IEEE International Conference on Computer Vision (ICCV 2009), 2009.
Detection and recognition of different object categories is described in:
- R. Fergus, P. Perona and A. Zisserman, "Object class recognition by unsupervised scale-invariant learning", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2003), 2003.
- G. Csurka, C. Dance, L. Fan, J. Willamowski and C. Bray, "Visual categorization with bags of keypoints", ECCV Workshop on Statistical Learning in Computer Vision, 2004.
- J. Winn, A. Criminisi and T. Minka, "Object categorization by learned universal visual dictionary", IEEE International Conference on Computer Vision (ICCV 2005), 2005.
Object tracking is described in:
- D. Comaniciu, V. Ramesh and P. Meer, "Kernel-based object tracking", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 5, pp. 564-577, 2003.
- M. Isard and A. Blake, "Condensation—conditional density propagation for visual tracking", International Journal of Computer Vision, vol. 29, no. 1, pp. 5-28, 1998.
Face analysis, gaze detection and tracking, and emotion analysis are described in:
- T.F. Cootes, G.J. Edwards and C.J. Taylor, "Active Appearance Models", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 681-685, 2001.
- R. Valenti, N. Sebe, T. Gevers and I. Cohen, "Machine Learning Techniques for Face Analysis", Springer, pp. 159-188, 2008.
- R. Valenti and T. Gevers, "Accurate Eye Center Location and Tracking Using Isophote Curvature", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008), 2008.
- R. Valenti, Z. Yucel and T. Gevers, "Robustifying Eye Center Localization by Head Pose Cues", IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009), 2009.
- L.P. Morency, J. Whitehill and J. Movellan, "Monocular Head Pose Estimation using Generalized Adaptive View-based Appearance Model", Image and Vision Computing, Elsevier, 2009.
These publications are incorporated herein by reference.
Some examples of audio cues, which can be combined with the above visual cues, are laughing, talking to friends in the group, talking on the phone, calling somebody farther away, talking to a baby/child, singing, playing music from a portable device, and talking to the system or the surface 4.
Some examples of situations where the system can be used are: attracting attention of passers-by to a shopping window, inviting customers to a storefront, inviting visitors at company trade shows/symposiums/fairs/automobile salons, personalized service (i.e. based on recognition/needs) by robot assistants, elderly care (measuring attention time-span, action/motion recognition), child care/children's corners (of shopping malls/public institutions/physician's practices), serving personalized content (news/daily schedule details/calls information), personalized ambient intelligence (household/electronics devices/regulation systems), personalized content (guide) in museums/cultural monuments/historical sites, assistance in browsing of a library/video club/archive/collection, serving sports information (e.g. up-to-date match information/results) to fans, serving appropriate traffic information to drivers (congestion routes/accidents/expected arrival times), and serving information about opening hours/waiting times at public institutions.
It will be appreciated by those skilled in the art that changes could be made to the embodiments described above without departing from the broad inventive concept thereof. It is understood, therefore, that this invention is not limited to the particular embodiments disclosed, but it is intended to cover modifications within the spirit and scope of the present invention as defined by the appended claims.
Claims (15)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
NL2004709A NL2004709C2 (en) | 2010-05-12 | 2010-05-12 | System and method for communicating information to a public. |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
NL2004709 | 2010-05-12 | ||
NL2004709A NL2004709C2 (en) | 2010-05-12 | 2010-05-12 | System and method for communicating information to a public. |
Publications (1)
Publication Number | Publication Date |
---|---|
NL2004709C2 true NL2004709C2 (en) | 2011-11-15 |
Family
ID=42335172
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
NL2004709A NL2004709C2 (en) | 2010-05-12 | 2010-05-12 | System and method for communicating information to a public. |
Country Status (1)
Country | Link |
---|---|
NL (1) | NL2004709C2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110393922A (en) * | 2019-08-26 | 2019-11-01 | 徐州华邦益智工艺品有限公司 | A kind of acoustic control down toy dog with projection function |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001045004A1 (en) * | 1999-12-17 | 2001-06-21 | Promo Vu | Interactive promotional information communicating system |
US20030088832A1 (en) * | 2001-11-02 | 2003-05-08 | Eastman Kodak Company | Method and apparatus for automatic selection and presentation of information |
US20040044564A1 (en) * | 2002-08-27 | 2004-03-04 | Dietz Paul H. | Real-time retail display system |
WO2007125285A1 (en) * | 2006-04-21 | 2007-11-08 | David Cumming | System and method for targeting information |
US20080059994A1 (en) * | 2006-06-02 | 2008-03-06 | Thornton Jay E | Method for Measuring and Selecting Advertisements Based Preferences |
US20090097712A1 (en) * | 2007-08-06 | 2009-04-16 | Harris Scott C | Intelligent display screen which interactively selects content to be displayed based on surroundings |
Legal Events
| Date | Code | Title | Description |
| --- | --- | --- | --- |
| | V1 | Lapsed because of non-payment of the annual fee | Effective date: 20131201 |