GB2511479A - Interacting toys - Google Patents
Interacting toys
- Publication number
- GB2511479A GB1222755.9A GB201222755A
- Authority
- GB
- United Kingdom
- Prior art keywords
- toy
- theme
- doll
- audio data
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63H—TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
- A63H3/00—Dolls
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63H—TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
- A63H3/00—Dolls
- A63H3/28—Arrangements of sound-producing means in dolls; Means in dolls for producing sounds
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63H—TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
- A63H30/00—Remote-control arrangements specially adapted for toys, e.g. for toy vehicles
- A63H30/02—Electrical arrangements
- A63H30/04—Electrical arrangements using wireless transmission
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F9/5016—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals the resource being the memory
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B5/00—Electrically-operated educational appliances
- G09B5/06—Electrically-operated educational appliances with both visual and audible presentation of the material to be studied
- G09B5/062—Combinations of audio and printed presentations, e.g. magnetically striped cards, talking books, magnetic tapes with printed texts thereon
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B20/00—Signal processing not specific to the method of recording or reproducing; Circuits therefor
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02M—APPARATUS FOR CONVERSION BETWEEN AC AND AC, BETWEEN AC AND DC, OR BETWEEN DC AND DC, AND FOR USE WITH MAINS OR SIMILAR POWER SUPPLY SYSTEMS; CONVERSION OF DC OR AC INPUT POWER INTO SURGE OUTPUT POWER; CONTROL OR REGULATION THEREOF
- H02M1/00—Details of apparatus for conversion
- H02M1/32—Means for protecting converters other than automatic disconnection
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02M—APPARATUS FOR CONVERSION BETWEEN AC AND AC, BETWEEN AC AND DC, OR BETWEEN DC AND DC, AND FOR USE WITH MAINS OR SIMILAR POWER SUPPLY SYSTEMS; CONVERSION OF DC OR AC INPUT POWER INTO SURGE OUTPUT POWER; CONTROL OR REGULATION THEREOF
- H02M3/00—Conversion of dc power input into dc power output
- H02M3/02—Conversion of dc power input into dc power output without intermediate conversion into ac
- H02M3/04—Conversion of dc power input into dc power output without intermediate conversion into ac by static converters
- H02M3/10—Conversion of dc power input into dc power output without intermediate conversion into ac by static converters using discharge tubes with control electrode or semiconductor devices with control electrode
- H02M3/145—Conversion of dc power input into dc power output without intermediate conversion into ac by static converters using discharge tubes with control electrode or semiconductor devices with control electrode using devices of a triode or transistor type requiring continuous application of a control signal
- H02M3/155—Conversion of dc power input into dc power output without intermediate conversion into ac by static converters using discharge tubes with control electrode or semiconductor devices with control electrode using devices of a triode or transistor type requiring continuous application of a control signal using semiconductor devices only
- H02M3/156—Conversion of dc power input into dc power output without intermediate conversion into ac by static converters using discharge tubes with control electrode or semiconductor devices with control electrode using devices of a triode or transistor type requiring continuous application of a control signal using semiconductor devices only with automatic control of output voltage or current, e.g. switching regulators
- H02M3/158—Conversion of dc power input into dc power output without intermediate conversion into ac by static converters using discharge tubes with control electrode or semiconductor devices with control electrode using devices of a triode or transistor type requiring continuous application of a control signal using semiconductor devices only with automatic control of output voltage or current, e.g. switching regulators including plural semiconductor devices as final control devices for a single load
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
- H03M7/50—Conversion to or from non-linear codes, e.g. companding
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63H—TOYS, e.g. TOPS, DOLLS, HOOPS OR BUILDING BLOCKS
- A63H2200/00—Computerized interactive toys, e.g. dolls
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/023—Free address space management
- G06F12/0238—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
- G06F12/0246—Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
Abstract
This invention relates to toys (10, 20, 30, 40 figure 1a). In particular, this invention relates to toys such as dolls that interact with each other. One aspect of this invention relates to a toy comprising: a processor 102; a memory 106 for storing at least one group of data, each at least one group comprising of a plurality of expressive responses, and each group representing a respective theme; an output for said expressive responses; the toy being adapted to exchange such expressive responses with another such toy; means for receiving an instructive response from a user; and means for altering the exchange of expressive responses between the toys in dependence upon the received user instructive response. Also disclosed is an interactive toy that can download data relating to an identifier from another toy, an interactive toy with assigned personality parameter, an authoring tool for creating themed data for interactive toys, a communication interface for an interactive toy, to an H-bridge circuit arrangement (920 figure 47), a method for detecting (960 figure 50, 980 figure 52) potentially corrupt or otherwise invalid signals, a memory controller for allocating RAM, an audio coding scheme (1000 figure 57, 1040 figure 58).
Description
Interacting Toys This invention relates to toys. In particular, although not exclusively, this invention relates to toys such as dolls that interact with each other. This invention also relates to an H-bridge circuit arrangement, which is particularly applicable, but by no means limited, to use in amplifiers for battery-powered devices for which polarity protection is desired. The invention also relates to detecting potentially corrupt or otherwise invalid signals, such as audio signals. This invention also relates to an audio coding scheme.
Embedded computers and micro-processors have improved toys for children.
They have been used most extensively in educational toys, but have also been used in interactive toys. ActiMates® Barney® is one example of an interactive toy which responds to interaction from a child with appropriate vocalisations, and can sing along to videos.
According to one aspect of the invention, there is provided a toy comprising: a processor; a memory for storing at least one group of data, each said at least one group comprising a plurality of expressive responses, and each said group representing a respective theme; an output for said expressive responses; the toy being adapted to exchange such expressive responses with another such toy; means for receiving an instructive response from a user; and means for altering the exchange of expressive responses between the toys in dependence upon the received user instructive response.
According to another aspect of the invention, there is provided a method of communication between first and second toys comprising: storing at least one group of data on each toy, each said at least one group comprising a plurality of expressive responses, and each said group representing a respective theme; exchanging expressive responses between the first and second toys; receiving an instructive response from a user; and altering the exchange of expressive responses between the toys in dependence upon the received user instructive response.
According to another aspect of the invention, there is provided a toy adapted to interact with another such toy, the toy comprising: a processor; a memory for storing audio data; an output for outputting said audio data; means for receiving an identifier from the other such toy; and means for downloading audio data relating to said identifier for subsequent output by the toy.
According to a further aspect of the invention, there is provided a system for providing audio data to interacting toys, the system comprising: a server for storing identifiers corresponding to each of the toys, and audio data relating to said identifiers; a plurality of toys adapted to interact with one another and exchange identifiers when coming into contact with one another; and wherein the toys are adapted to download from the server the audio data related to the identifiers for subsequent output by each of the toys.
According to a further aspect of the invention, there is provided a method of communication between first and second toys, the method comprising: exchanging identifiers between the toys; and downloading audio data relating to said identifiers for subsequent output by the toys.
According to a further aspect of the invention, there is provided a toy adapted to interact with at least another such toy, the toy comprising: a memory for storing at least one group of data, each said at least one group of data comprising a plurality of expressive responses, and each said group representing a respective theme; an output for said expressive responses, the toy being adapted to exchange such expressive responses with other such toys; and means for selecting certain of the expressive responses in dependence on a personality parameter associated with the toy.
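By way of illustration only (the document does not prescribe an implementation), selecting expressive responses by personality might reduce to picking the role whose personality value is nearest the toy's own, in the manner Figure 6b later suggests. A minimal C sketch, assuming a single-integer personality parameter:

```c
#include <stdlib.h>

/* Pick the role whose personality value most closely matches the doll's
 * own personality parameter (cf. Figure 6b). The single-integer
 * personality representation is an assumption for illustration. */
int choose_role(const int *role_personality, int n_roles, int doll_personality)
{
    int best = 0;
    int best_d = abs(role_personality[0] - doll_personality);
    for (int i = 1; i < n_roles; i++) {
        int d = abs(role_personality[i] - doll_personality);
        if (d < best_d) { best = i; best_d = d; }
    }
    return best;   /* index of the closest-matching role */
}
```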
According to yet a further aspect of the invention, there is provided an authoring tool for creating themed data for toys, comprising means for receiving content relating to a particular theme; means for associating at least a part of the content with a personality parameter; means for processing said content to generate a set of instructions for operating said toy within said particular theme; and means for outputting said set of instructions.
According to another aspect of the invention, there is provided an authoring tool for creating themed data for toys, comprising means for receiving content in the form of a scripted dialogue relating to a particular theme; means for processing said content to generate a set of instructions for operating said toy within said particular theme; and means for outputting said set of instructions.
According to another aspect of the invention, there is provided an authoring tool for creating themed data for toys, comprising means for receiving content relating to a particular theme; means for processing said content to generate a plurality of different conversations each based on a set of expressive responses relating to a theme, wherein the conversations vary in dependence on a conversation condition; means for generating a set of instructions for operating said toys within said particular theme; and means for outputting said set of instructions.
According to another aspect of the invention, there is provided a method of creating themed data for toys, comprising receiving content in the form of a scripted dialogue relating to a particular theme; processing said content to generate a set of instructions for operating said toy within said particular theme; and outputting said set of instructions.
According to another aspect of the invention, there is provided apparatus for creating themed data for toys, comprising means for receiving content in the form of a scripted dialogue relating to a particular theme; means for processing said content to generate a set of instructions for operating said toy within said particular theme; and means for outputting said set of instructions.
According to another aspect of the invention, there is provided an authoring tool for creating themed data for toys, comprising means for receiving content relating to a particular theme; means for processing said content to generate a set of instructions for operating said toy within said particular theme; means for synthesising audio data relating to said content; and means for outputting said set of instructions.
Various aspects of the above inventions also provide the following functionality / advantages: * A simplified authoring tool for creating themed data for toys.
* Arithmetic capability: a toy comprising: a processor; a memory coupled to said processor; and an output coupled to said processor, wherein said processor includes means for performing arithmetic operations (preferably addition, subtraction, multiplication, and/or division).
* Voice modulation: phrase versions in different audio files; or alternatively a doll with the capability of volume modulation and/or volume control.
* A toy or device that connects to a web site and contributes to a point score table (leader board).
According to another aspect of the invention, there is provided a communication interface for connecting a toy with a remote server, comprising means for detecting the toy; means for receiving an identification which identifies the toy; means for forwarding the identification on to the remote server; and means for transferring data between the remote server and the toy.
Polarity is defined herein as the orientation in which a power supply is connected to a circuit. In many electrical devices there is a correct polarity and an incorrect polarity. If a power supply is connected incorrectly, the device may not operate correctly, or circuit elements could become damaged. For example, a user may connect a DC battery with opposite/incorrect polarity (i.e. the battery is installed the wrong way round). Alternatively the user may connect an alternating current (AC) power source instead of a direct current (DC) power source, resulting in the AC current source providing undesirable current of opposite polarity for half the AC cycle.
To guard against the problems associated with power supplies of incorrect polarity, existing electrical devices often simply include one or more diodes in the circuit to stop current from flowing in the wrong direction. The use of diodes has a number of disadvantages. Firstly, a diode always consumes some power, even during correct operation of the circuit. In battery powered devices, this can lead to shortened battery life or reduced performance of the device. Furthermore, if the user connects an AC power source instead of a DC source, the device may show some signs of operation, but the circuitry may become damaged by the reverse-polarity component of the AC power.
An alternative solution addressing at least some of the above-mentioned problems is therefore needed.
According to a further aspect of the invention, there is provided an H-bridge circuit arrangement comprising: a pair of bipolar transistors and a pair of field-effect transistors (FETs), arranged such that each side of the H-bridge comprises a bipolar transistor and a field-effect transistor; and a pair of reverse-biased diodes, each of the reverse-biased diodes being connected between the base of a respective one of the bipolar transistors and signal ground; such that, in the event of a given bipolar transistor being subjected to polarity reversal, its base potential is substantially the same as its emitter potential, such that it does not come into a state of conduction.
This asymmetrical "hybrid" arrangement, including the reverse-biased diodes, provides the advantage that, if the power supply to the H-bridge is reversed, the bipolar transistors do not turn on, and the H-bridge circuit is thereby protected.
Preferably, on each side of the H-bridge, the collector of the bipolar transistor is connected to the drain of the accompanying field-effect transistor.
In a presently-preferred embodiment, the bipolar transistors are PNP bipolar transistors and the field-effect transistors are MOSFETs. Other bipolar and field-effect transistor types are possible, though, as those skilled in the art will appreciate.
The circuit arrangement may further comprise a power supply arranged to supply power to the H-bridge. Preferably no diode is provided between the power supply and the H-bridge, thereby avoiding an undesirable voltage drop (across such a diode) and consequently lost electrical energy. Instead, an inductor may be connected between the power supply and the H-bridge.
The power supply may be a battery, to supply (ideally) a constant DC voltage.
Audio signals are commonly encoded, compressed, transmitted, decoded, stored and have various other processes performed on them prior to actual playback.
These processes are often necessary to facilitate transmission and subsequent playback, or to save space on the audio playing device. In any digital or analogue audio processing, there is a chance of the signal becoming corrupted due to errors propagating; for example the encoding and decoding processes not being exactly inverse, the compression removing important data, or data loss during storage.
Countermeasures to this include simply repeating the signal, so that the probability of the same error appearing in all repetitions is very low. This is inefficient, as it multiplies the amount of data by at least two. A more advanced countermeasure is to introduce a 'checksum' into the data stream, produced by code such as a Cyclic Redundancy Check (CRC). This is a data segment which is derived from the original (correct) data. The receiver uses the same algorithm to generate a checksum on the received data and thus can determine if the data has become corrupted. This can be a processor-intensive process, and may not suit situations where the processing power is limited or the size of the data is particularly large. Furthermore, if the checksum itself is corrupted, the receiver may incorrectly conclude that the data stream itself is corrupted.
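For concreteness, a minimal sketch of such a CRC-style checksum in C; the 8-bit polynomial 0x07 is a common textbook choice and is merely an assumption here, not a parameter taken from this document:

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-8 over a buffer, using the common polynomial
 * x^8 + x^2 + x + 1 (0x07). The sender appends crc8(data, len) to the
 * stream; the receiver recomputes it over the received data and
 * compares it with the appended value. */
uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0x00;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}
```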
An alternative solution to detecting corruption in a signal, avoiding at least some of the potential disadvantages of the prior art, would be advantageous.
According to another aspect of the invention, there is provided a method of processing a signal, the method comprising: periodically adding a characteristic signal into a first signal, thereby creating an augmented signal; (optionally) encoding the augmented signal; (optionally) decoding the augmented signal; sampling the (decoded) augmented signal for a sampling period greater than the period of the characteristic signal; and determining the presence of the characteristic signal in the (decoded) augmented signal, the presence of the characteristic signal in the (decoded) augmented signal indicating that the augmented signal has not been corrupted and/or is from an authentic source.
According to another aspect of the invention, there is provided a method of processing a decoded signal, the method comprising: sampling the decoded signal for a predetermined sampling period; and determining the presence of a periodic characteristic signal in the decoded signal, the presence of the characteristic signal indicating that the decoded signal has not been corrupted and/or is from an authentic source.
According to another aspect of the invention, there is provided a method of processing a decoded signal, the method comprising: analysing the frequency spectrum of the decoded signal; and determining the presence of a periodic characteristic signal in the decoded signal, the presence of the characteristic signal indicating that the decoded signal has not been corrupted and/or is from an authentic source.
According to another aspect of the invention, there is provided a method of encoding a signal, the method comprising: receiving a first signal; periodically adding a characteristic signal into the first signal, thereby creating an augmented signal; and encoding the augmented signal.
According to another aspect of the invention, there is provided apparatus for processing a decoded signal, the apparatus comprising: means for sampling the decoded signal for a predetermined sampling period; and means for determining the presence of a periodic characteristic signal in the decoded signal, the presence of the characteristic signal indicating that the decoded signal has not been corrupted and/or is from an authentic source.
According to another aspect of the invention, there is provided apparatus for processing a decoded signal, the apparatus comprising: means for analysing the frequency spectrum of the decoded signal; and means for determining the presence of a periodic characteristic signal in the decoded signal, the presence of the characteristic signal indicating that the decoded signal has not been corrupted and/or is from an authentic source.
According to another aspect of the invention, there is provided apparatus for encoding a signal, the apparatus comprising: means for receiving a first signal; and means for periodically adding a characteristic signal into the first signal, thereby creating an augmented signal.
According to another aspect of the invention, there is provided a system for processing a signal, the system comprising: means for periodically adding a characteristic signal into a first signal, thereby creating an augmented signal; (optionally comprising) means for encoding the augmented signal; (optionally comprising) means for decoding the augmented signal; means for sampling the decoded signal for a sampling period greater than the period of the characteristic signal; and means for determining the presence of the characteristic signal in the (decoded) augmented signal, the presence of the characteristic signal in the (decoded) augmented signal indicating that the augmented signal has not been corrupted and/or is from an authentic source.
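A minimal C sketch of the characteristic-signal idea: a low-level periodic pilot tone is mixed into the audio, and the receiver checks for its presence by correlating over a window longer than one pilot period. The pilot frequency, level, sample rate and detection threshold below are illustrative assumptions, not values given in this document:

```c
#include <math.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define SAMPLE_RATE 8000.0  /* Hz, assumed */
#define PILOT_HZ    40.0    /* assumed pilot (characteristic) frequency */
#define PILOT_LEVEL 0.01    /* assumed amplitude, small relative to speech */

/* Mix the pilot into a block of samples ("augmentation"); start_index
 * keeps the pilot phase continuous across successive blocks. */
void add_pilot(float *buf, size_t n, size_t start_index)
{
    for (size_t i = 0; i < n; i++) {
        double t = (double)(start_index + i) / SAMPLE_RATE;
        buf[i] += (float)(PILOT_LEVEL * sin(2.0 * M_PI * PILOT_HZ * t));
    }
}

/* Detect the pilot by correlating with a quadrature pair over a window
 * longer than one pilot period; a missing pilot suggests corruption or
 * an inauthentic source. The threshold is a heuristic. */
int pilot_present(const float *buf, size_t n)
{
    double re = 0.0, im = 0.0;
    for (size_t i = 0; i < n; i++) {
        double ph = 2.0 * M_PI * PILOT_HZ * (double)i / SAMPLE_RATE;
        re += buf[i] * cos(ph);
        im += buf[i] * sin(ph);
    }
    /* Estimated pilot amplitude: 2*sqrt(re^2 + im^2)/n. */
    double amp = 2.0 * sqrt(re * re + im * im) / (double)n;
    return amp > PILOT_LEVEL * 0.5;
}
```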
Devices which process data in order to perform specific tasks invariably have a limited processor size; therefore, the overall operation of the device is often limited by the amount of Random Access Memory (RAM) associated with the processor. Processors with large amounts of RAM are generally more expensive than their smaller counterparts, and also tend to consume a larger amount of energy. Thus, for applications with limited battery life, or devices where cost is critical, processors with large RAM components are often not practical.
In standard non-OS computers (for example flash drives) where RAM is limited, the compiler and linker control the memory map, and re-use of RAM is performed 'last-in-first-out' (LIFO) using the 'stack' mechanism. This mechanism can easily crash due to a 'stack overflow' where there is not enough RAM to be shared between competing tasks.
Therefore there is a need to improve the performance of devices with limited RAM without changing the hardware involved.
According to another aspect of the invention, there is provided a memory controller for allocating Random Access Memory (RAM), the controller comprising: means for allocating a portion of the available RAM to a first group of processing tasks; means for allocating the same portion of RAM to a second group of processing tasks; wherein the first group of processing tasks comprises write/erase tasks and the second group of processing tasks comprises decode/read tasks; and wherein the memory controller is adapted to control the tasks so that they are mutually exclusive.
According to another aspect of the invention, there is provided a Random Access Memory (RAM) device wherein the RAM allocation of one group of processing tasks is overlaid onto another group's RAM allocation, the two groups of processing tasks comprising: (a) write/erase; (b) decode/read; wherein processing tasks in group (a) are mutually exclusive to processing tasks in group (b).
According to another aspect of the invention, there is provided a method for allocation of Random Access Memory, the method comprising: allocating a portion of the available RAM to a first group of processing tasks; allocating the same portion of RAM to a second group of processing tasks; wherein the first group of processing tasks comprises write/erase tasks and the second group of processing tasks comprises decode/read tasks; and wherein the first and second groups of processing tasks are mutually exclusive to one another.
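As a sketch of how the overlaid allocation might look in C (the buffer sizes and the task grouping are illustrative assumptions): a union lets the write/erase buffers and the decode/read buffers occupy the same RAM region, while a simple ownership flag enforces the claimed mutual exclusion.

```c
#include <stdint.h>

typedef enum { MODE_IDLE, MODE_WRITE_ERASE, MODE_DECODE_READ } ram_mode_t;

/* The flash write/erase working buffers (group a) and the audio
 * decode/read buffers (group b) are never needed at the same time,
 * so one union overlays them on the same RAM. */
static union {
    struct { uint8_t page_buf[512]; uint8_t verify_buf[512]; } flash;   /* (a) */
    struct { int16_t pcm[384];      uint8_t bitstream[256];  } decode;  /* (b) */
} shared_ram;

static ram_mode_t ram_mode = MODE_IDLE;

/* Claim the shared region for one task group; refuse if the other
 * group currently owns it, enforcing mutual exclusion. */
int ram_claim(ram_mode_t mode)
{
    if (ram_mode != MODE_IDLE && ram_mode != mode)
        return -1;          /* other group active: refuse */
    ram_mode = mode;
    return 0;
}

void ram_release(void) { ram_mode = MODE_IDLE; }
```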
Audio coding schemes are used to reduce the number of bits (data) required to contain an audio signal whilst maintaining a certain degree of quality or reliability.
Smaller amounts of data result in faster transmission times and take up less storage space. Larger amounts of data can result in a signal of higher quality or one which is less liable to corruption. Furthermore, there is a wide choice of other attributes a coding scheme has, including: 1. Lossy / lossless - Generally it is possible to compress and decompress an audio signal without any loss of information, i.e. the output is bit-for-bit identical to the input. This form of compression is referred to as lossless. There is a theoretical limit to the amount of lossless compression that can be imposed upon a signal. If a higher compression is required, then lossy compression must be used.
Although the bits delivered by a lossy compression system are not the same as those delivered by a lossless compression system, every effort is normally made to achieve only the minimum amount of audible degradation consistent with the amount of data compression required.
2. Fixed / variable data rate - Lossless methods of compression, by their very nature, yield a variable data rate: less data when the signal is quiet, more when it is complex. Lossy methods can deliver a variable rate, if designed for a constant low level of degradation, or a fixed data rate, if allowed to impose a variable amount of degradation.
3. Large / small buffering - Larger or smaller buffers may be used, both during encode and decode. Such buffers allow the designer to smooth out the data rate of a variable data rate system so that it appears more like a fixed rate system, or to keep the delivered data rate below some defined limit.
4. High / low complexity - The computational complexity of the encode and decode processes may be large or small; normally, complex processes deliver better performance, but at a cost.
5. Low / high delay - Low delay systems are used in applications like telephony, where the total encode-decode delay is limited. High delay systems are used for the encoding of data onto pre-recorded media (Digital Versatile Disc (DVD), Blu-ray, etc.), where the whole of the audio data to be transported is available to the encoder before any encoded output is required.
For a particular signal and a particular application, there is an optimum combination of all these attributes. However, given a particular application and a range of different signals, a 'one size fits all' solution is unlikely. Therefore there is a need for an audio coding scheme which can suitably code a range of signals given a particular application.
According to another aspect of the invention, there is provided a method for encoding an audio signal, the method comprising the steps of: (a) normalising the peak level of the signal; (b) applying a gain transformation to the signal; (c) quantising the signal into a number of bits; (d) applying a pre-emphasis filter to the quantised signal; and (e) applying an encoder table to generate an encoded signal.
According to another aspect of the invention, there is provided a method for decoding audio data, the method comprising the steps of: (a) acquiring an encoded signal and encoder information; (b) applying a decoder table to the encoded signal; (c) applying an inverse pre-emphasis filter; and (d) applying an inverse gain transformation.
According to another aspect of the invention, there is provided apparatus adapted to encode an audio signal, the apparatus comprising: (a) means for normalising the peak level of the signal; (b) means for applying a gain transformation to the signal; (c) means for quantising the signal into a number of bits; (d) means for applying a pre-emphasis filter to the quantised signal; and (e) means for applying an encoder table to generate an encoded signal.
According to another aspect of the invention, there is provided apparatus adapted to decode an encoded audio signal, the apparatus comprising: (a) means for receiving an encoded signal and encoder information; (b) means for applying a decoder table to the encoded signal; (c) means for applying an inverse pre-emphasis filter; and (d) means for applying an inverse gain transformation.
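A minimal C sketch of an encoder following the claimed step order: (a) peak normalisation, (b) gain transformation, (c) quantisation, (d) pre-emphasis of the quantised signal, and (e) an encoder table. The mu-law-style gain curve, the quantiser range, the first-difference pre-emphasis and the offset 'table' are placeholders; the document does not fix these choices.

```c
#include <stdint.h>
#include <stddef.h>
#include <math.h>

/* Encode one block of samples in [-1, 1]. peak_out carries the
 * "encoder information" a matching decoder needs to invert
 * steps (a)-(b). */
void encode_block(const float *in, uint8_t *out, size_t n, float *peak_out)
{
    float peak = 1e-9f;
    for (size_t i = 0; i < n; i++)                 /* (a) find peak level */
        if (fabsf(in[i]) > peak) peak = fabsf(in[i]);
    *peak_out = peak;

    int prev_q = 0;
    for (size_t i = 0; i < n; i++) {
        float x = in[i] / peak;                    /* (a) normalise       */
        float g = copysignf(log1pf(255.0f * fabsf(x)) / log1pf(255.0f), x);
                                                   /* (b) gain transform  */
        int q = (int)lrintf(g * 31.0f);            /* (c) quantise        */
        int d = q - prev_q;                        /* (d) pre-emphasis    */
        prev_q = q;                                /*     (first diff)    */
        out[i] = (uint8_t)(d + 64);                /* (e) "encoder table":
                                                      offset code, 2..126 */
    }
}
```

A matching decoder would invert each step in reverse order: undo the table offset, integrate the differences, map back through the inverse gain curve, and rescale by the transmitted peak.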
Further features of the invention are characterised by the dependent claims.
The invention extends to methods and/or apparatus substantially as herein described with reference to the accompanying drawings.
The invention also provides a computer program and a computer program product for carrying out any of the methods described herein and/or for embodying any of the apparatus features described herein, and a computer readable medium having stored thereon a program for carrying out any of the methods described herein and/or for embodying any of the apparatus features described herein.
The invention also provides a signal embodying a computer program for carrying out any of the methods described herein and/or for embodying any of the apparatus features described herein, a method of transmitting such a signal, and a computer product having an operating system which supports a computer program for carrying out any of the methods described herein and/or for embodying any of the apparatus features described herein.
Any apparatus feature as described herein may also be provided as a method feature, and vice versa. As used herein, means plus function features may be expressed alternatively in terms of their corresponding structure, such as a suitably programmed processor and associated memory.
Any feature in one aspect of the invention may be applied to other aspects of the invention, in any appropriate combination. In particular, method aspects may be applied to apparatus aspects, and vice versa. Furthermore, any, some and/or all features in one aspect can be applied to any, some and/or all features in any other aspect, in any appropriate combination.
It should also be appreciated that particular combinations of the various features described and defined in any aspects of the invention can be implemented and/or supplied and/or used independently.
Furthermore, features implemented in hardware may generally be implemented in software, and vice versa. Any reference to software and hardware features herein should be construed accordingly.
These and other aspects of the present invention will now be described, by way of example only, with reference to the following figures, in which:
Figure 1a is an illustration of various themed toys and/or dolls;
Figure 1b is a schematic illustration of a doll;
Figure 2 is a schematic illustration of a wireless communications dongle;
Figure 3 is a schematic conversation flow diagram in which user-doll interaction is provided;
Figure 4 is an example of a conversation flow diagram showing user-doll interaction;
Figure 5 is a schematic diagram showing a toy with name audio files and name references connected to a website;
Figure 6a is an example of a conversation tree;
Figure 6b is an example of four roles with different personality values, and a doll with a personality value that has a closest match with one of the roles;
Figure 6c is three examples of conditional conversation flow;
Figure 7 is the 'Theme Development' window of the authoring tool graphic user interface;
Figure 8 is the 'Theme' window of the authoring tool graphic user interface;
Figure 9 is a populated 'Theme' window of the authoring tool graphic user interface;
Figure 10 is the 'Publish' form of the authoring tool graphic user interface;
Figure 11 is the 'Voice Maintenance' form of the authoring tool graphic user interface;
Figure 12 is the 'Names Maintenance' form of the authoring tool graphic user interface;
Figure 13 is the 'Stock Phrase Maintenance' form of the authoring tool graphic user interface;
Figure 14 is the 'Add Attribute' form of the authoring tool graphic user interface;
Figure 15 is the context list of the authoring tool graphic user interface;
Figure 16 is the 'Add Phrases' form of the authoring tool graphic user interface;
Figure 17 is the 'Edit Phrases' form of the authoring tool graphic user interface;
Figure 18 is the 'Add Role' form of the authoring tool graphic user interface;
Figure 19 is the 'Import Audio' form of the authoring tool graphic user interface;
Figure 20 is the 'Set Label' form of the authoring tool graphic user interface;
Figure 21 is the 'Set Attributes' form of the authoring tool graphic user interface;
Figure 22 is the context entry list with attributes set;
Figure 23 is the 'Conditions' form of the authoring tool graphic user interface;
Figure 24 is the 'Say' form of the authoring tool graphic user interface;
Figure 25 is the 'Branch' form of the authoring tool graphic user interface;
Figure 26 is the 'Transition' form of the authoring tool graphic user interface;
Figure 27 is the 'Choose Phrase' form of the authoring tool graphic user interface;
Figure 28 is the 'Choose Attribute' form of the authoring tool graphic user interface;
Figure 29 is the 'Choose Numeric' form of the authoring tool graphic user interface;
Figure 30 is the 'Choose Dolls Data' form of the authoring tool graphic user interface;
Figures 31 to 33 are simulation windows of the authoring tool graphic user interface;
Figure 34 is the 'Publish' form showing theme, topic, and scenario information;
Figure 35 is the 'Edit Scenario Text' form of the authoring tool graphic user interface;
Figure 36 is the 'Edit Topic Text' form of the authoring tool graphic user interface;
Figure 37 is the 'Edit Theme Text' form of the authoring tool graphic user interface;
Figure 38 is the 'Publish' form showing voice information;
Figure 39 is the 'Edit Voice Text' form of the authoring tool graphic user interface;
Figure 40 is the 'Publish' form showing name information;
Figure 41 is the 'Edit Name Text' form of the authoring tool graphic user interface;
Figure 42 is the 'Publish' form showing a log of website communication;
Figure 43a is a storyboard view in the Story Creator tool;
Figure 43b is a panel editing view in the Story Creator tool;
Figure 44 is a schematic illustration of a computing device for the Story Creator tool;
Figures 45a, 45b and 45c are exemplary screen shots of connection interfaces;
Figure 46 is a circuit diagram showing an amplifier circuit incorporating a traditional FET-based H-bridge arrangement;
Figure 47 is a circuit diagram showing an amplifier circuit incorporating a new hybrid H-bridge arrangement;
Figure 48(a) is an example signal prior to augmentation with a characteristic signal;
Figure 48(b) is an example signal augmented with a characteristic signal;
Figure 49 is a flow diagram showing the signal augmentation process;
Figure 50 is an example device for augmenting a signal;
Figure 51 is a flow diagram showing the playback process of an augmented signal;
Figure 52 is part of an example playback device;
Figure 53 is a representation of the RAM usage of an example playback device;
Figure 54 is a flow diagram showing an example signal encoding process;
Figure 55 is a flow diagram showing an example signal decoding process;
Figure 56 shows example curve bender graphs as used in the decode process;
Figure 57 is an example signal encoder;
Figure 58 is an example signal decoder;
Figure 59 is a mathematically generated example filtered waveform;
Figure 60 shows the waveform of Figure 59 with various curve benders applied; and
Figure 61 shows the effect of various curve benders on a different example waveform.
In the figures, like elements are represented by like reference numerals.
The numerical values (e.g. of resistors and inductors) given in the figures are merely provided as examples, and alternative values are possible.
Background
The basic features and operation of such interacting toys are known in the art, for example from International Patent Publication Nos. WO2006/114625, WO2009/010760, WO2010/007336 and WO2011/124916 (which are hereby incorporated herein by reference in their entirety); however a brief description is provided below to aid in the understanding of the present invention.
Children enjoy playing with dolls, and often incorporate them into their imaginary play. Dolls such as those shown in Figure 1a are able to interact more fully with children, and with each other, in such play. A first doll 10 and a second doll 20 have generic bodies 12, 22 which may be themed by adding dresses, shoes and accessories. In Figure 1a a first doll 10 having a generic body 12 represents a female adult and is themed as a ballerina, being dressed in a tutu with ballet shoes. A second doll 20, also having a generic body 22, represents a female adult, and is themed as a tennis player, having appropriate clothing and racket and ball accessories. The theme may be pre-programmed, determined by the downloaded/inputted data, or set by a key accessory (tennis racket, ballet shoes, or a theme tag) which can be sensed by the doll. The dolls' bodies may be manipulated into appropriate poses, as shown.
The following description relates to a toy, such as a toy doll, that is enabled to communicate with other such toys; the dolls are adapted to coordinate the speech between them. In another embodiment the toy is a tank or another such vehicle; again, the tanks are adapted to communicate wirelessly with other such tanks to coordinate the behaviour of the tanks instead of the speech between dolls. In general the toys are adapted to appear animate, and in particular human, or human-controlled. Figure 1a shows four examples of themed toys: a ballerina doll 10; a tennis-playing doll 20; a generic doll 30 walking a dog; and a toy 40 in the form of a tank. The toys are adapted to communicate wirelessly with other such toys within the theme.
Figure 1b shows a schematic representation of the known doll, with the hardware components required to allow the doll to communicate and perform other such tasks. The doll 100, as shown in Figure 1b, comprises a processor 102 that includes a wireless module 104. The processor is in communication with memory 106, ROM 108, and RAM 110. An IR/RF transmitter/receiver 112 is connected to the processor/wireless module and is enabled to transmit/receive signals to/from other such dolls. The doll is also connected to a loudspeaker 114. A USB controller 116 is used to update the memory 106, and also to charge, via the charger circuitry 118, the battery 120.
The memory 106 stores information relating to conversations that the dolls can have, and is accessed by the processor when it is compiling speech. The ROM 108 is used to store permanent information relating to the doll, such as the doll's name and ID number. This information is used in the initialisation procedure when setting up a network of dolls. The RAM 110 stores information relating to a current conversation, and is used to produce more realistic conversation by storing information relating to phrases already used in the current conversation, for example.
Each doll 100 contains in memory 106: a data set containing the doll's name, and other variables, for example variables defined during a conversation; a set of instructions which produces the conversation; and a set of audio data. The variables defined during the conversation are stored on the controller doll.
The dolls are adapted to download a theme (typically in the form of an audio data file including expressive responses relating to a particular theme) via a PC from a website, and then converse in that theme with other such dolls.
A USB communications dongle is described in International Patent Publication No. WO2009/010760 (which is hereby incorporated herein in its entirety by reference), which enables a PC to interact wirelessly with a toy. Figure 2 shows a schematic representation of the USB communications dongle 1600, attached to a PC 122, and in wireless communication with the dolls 100. The dongle contains a wireless module 204, an IR/RF transmitter/receiver 212, and an interface 1602.
These components enable the downloading of a theme to a doll. Further, these components, except the interface 1602, are the same as contained within the doll 100, as described above. However, the PC 122 is utilised as the processor 1604, instead of the dongle having an independent processor as the doll 100 has, and so the PC effectively becomes a virtual doll able to communicate with the physical dolls 100. The virtual doll is provided with an animated avatar shown on the PC monitor. The avatar may be similar in appearance to the real doll, and the animation of the avatar may be synchronised with the speech of the doll. In order to run the conversations, the PC has stored in memory 1606 an emulator for emulating the processor of the toy.
A website is arranged to allow the user to download various themes, and also to interact with other users. This enables the users to interact both in the virtual world (via chat rooms, games, competitions, or the like) and in the physical world (by playing with other users with the communicating dolls).
USER-DOLL INTERACTIONS
Figure 3 shows a conversation flow diagram where a doll interacts directly with a user, allowing the user to influence the doll's subsequent doll-to-doll interactions.
In one example, a doll expresses a query, to which the user provides a response, in dependence on which the subsequent dialog between the dolls continues. A doll might for example ask its owner whether or not it should respond in a particular way during a doll-to-doll interaction. This affords advantages in the way in which a user can intervene in a doll-to-doll conversation. A physical button on the doll might be provided to enable a user to interact with, and instruct, the doll.
In Figure 3, the rectangular shapes indicate phrases expressed by a first doll, and oval dialogue shapes indicate phrases expressed by a second doll. At dialogue block 201 the second doll expresses a query and awaits the user's response, thus allowing the user to influence the course of the conversation with his response. The doll may continue on a default branch of the conversation if no response is received within a given period. If the doll receives a response within the given period then the conversation proceeds along the line determined by the user's response.
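One plausible way to realise the 'default branch on no response' behaviour is a polling loop with a timeout, sketched below in C. The platform hooks (millis, button_pressed, play_phrase), the button assignments and the 5-second timeout are all assumptions for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical platform hooks: millisecond tick, button polling,
 * and audio phrase playback. */
extern uint32_t millis(void);
extern bool button_pressed(int button_id);
extern void play_phrase(int phrase_id);

#define BTN_YES 0
#define BTN_NO  1
#define RESPONSE_TIMEOUT_MS 5000u   /* assumed response window */

/* Speak a query, then branch on the user's button press; fall back to
 * a default branch if no response arrives within the given period. */
int run_query(int query_phrase, int yes_phrase, int no_phrase, int default_phrase)
{
    play_phrase(query_phrase);
    uint32_t start = millis();
    while (millis() - start < RESPONSE_TIMEOUT_MS) {
        if (button_pressed(BTN_YES)) { play_phrase(yes_phrase); return BTN_YES; }
        if (button_pressed(BTN_NO))  { play_phrase(no_phrase);  return BTN_NO; }
    }
    play_phrase(default_phrase);    /* default branch: no user response */
    return -1;
}
```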
In an example of the user influencing doll-to-doll interaction a doll might ask its owner "Shall I accept her invitation to go play tennis?" after another doll has invited her to play tennis. If the user responds with an acceptance, then the doll-to-doll conversation carries on relating to the subject of playing tennis (the doll might subsequently ask its owner "Can I put my tennis kit on now?"). If however the user responds with a refusal then the course of the doll's interactions changes, and the doll-to-doll conversation continues in a new direction that no longer relates to playing tennis.
Other types of user interaction than responding to a doll query are possible. For example, a user may be able to instruct a doll participating in conversation to leave the conversation, and hence cause the doll to conclude the conversation, or excuse itself from the conversation. The user may also be able to instruct a doll to change the theme of a conversation, and hence cause the doll to conclude the conversation in the current theme and change to a conversation on another theme.
The change in the course of the doll's interactions may affect the theme, in particular by changing to a new theme; or it may affect the course of a dialogue within a theme.
Figure 4 shows an example of user intervention that changes the course of the dialogue, but stays within the same theme (in the illustrated example the theme is 'walking the dog'). The doll expresses a first query 220. Depending on the user's response (in this case either approving the doll's suggestion with 'yes' 224, or disapproving the doll's suggestion with 'no' 226), a second query 222 may be expressed by the doll; in the illustrated example the doll reformulates the same suggestion, for the user to either again disapprove, or this time approve. When the end of the dialogue block 230 is reached, a new dialogue block may follow, for example within the same theme, or ending the theme to change to another theme.
User-doll interactions may be extended from the simple accept/refuse' scenario described above to permit more complex responses and dialog options. For example, the user could make a selection from a list of options ("Shall we go do our hair, bake a cake, or read a magazine now?") to influence the course of the doll's interactions. The input could be effected by (for instance) buttons on the doll, a remote control for the doll, or a computing device with a link to the doll.
For educational purposes a more sophisticated conversation between the doll and a child may provide additional benefits. For example, the doll could encourage the child to recall educational information, and respond with approval if the child provides a correct answer, thus assisting the child in learning the information ("Let's do a quiz game! What's the capital of France? I'll give you a hint: it's also the capital of fashion!").
NAME FILES
In one example, it is possible for the owner of a doll to select the doll's name, for instance during the initialisation of the doll. In the course of dialogue, a doll may address other dolls by name. However, a doll only has a limited memory, and hence a limited number of audio phrases available, and in this example is not capable of synthesising the audio data necessary to express another doll's name. If the owner is enabled to select the doll's name from a larger pool of possible names (for example 100 possible doll names), then it may not be efficient for each doll to have the entire selection of name data available. Further, if new names become available in the course of time, and new users start selecting the new names, then the existing users may not have the new names available.
To overcome this problem a doll is enabled to 'learn' to say a new doll's name by storing a reference associated with the new doll's name when first coming into contact with the new doll, and then retrieving an audio file associated with the new doll's name when next connecting to a server.
The concept may be extended to other variables relating to a doll that may be spoken in the course of a conversation; for example pets' names, the name of the doll's owner, the name of the town or area the doll 'lives' in, a place of birth, a home town, a hobby or interest, a favourite colour, a favourite food, or other similar variables.
Figure 5 illustrates how a doll 100 acquires the ability to refer to ('speak', or otherwise express) variables. When a doll 100 is exposed to a variable of another doll (e.g. a doll's name, or a doll's favourite pop star), it receives a signal from the other doll that contains a variable identifier 258. The identifier is compared to existing audio data 105 to determine if the audio data for that identifier is already available. This allows determination of whether the variable is a new, hitherto unfamiliar variable, or if the doll has already been exposed to the variable. If the relevant audio data is already available, then the doll can already refer to the variable in conversation. If the relevant audio data is not already available, then the identifier is stored in a required data identifiers part 107 of the doll's memory for subsequent retrieval of the appropriate audio data.
In the illustrated example, the variable is a name, and the name identifier is of the form Name[ID number]. Two name audio data files 103 are already available, in addition to the audio data file 101 for the doll's own name. Assuming the received identifier refers to a name the doll does not already have, the identifier is stored in a list 107 with other required data identifiers. The list 107 identifies data that is to be requested from the server 200 when the doll next connects to the server 200.
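This check-then-queue flow can be summarised in a short sketch. The following Python fragment is illustrative only; the cache, the required-identifier list and the server call (server.fetch) are assumed names, not part of the actual implementation:

audio_cache = {"Name01": b"...", "Name02": b"..."}  # audio data 105/103 (placeholder bytes)
required_ids = set()                                # required data identifiers list 107

def on_variable_received(identifier):
    # Called when another doll transmits a variable identifier (258).
    if identifier in audio_cache:
        return audio_cache[identifier]  # the doll can already speak this variable
    required_ids.add(identifier)        # remember it until the next server connection
    return None

def on_server_connect(server, voice_setting):
    # Submit the required identifiers together with the audio output settings.
    for identifier in list(required_ids):
        audio_cache[identifier] = server.fetch(identifier, voice=voice_setting)
        required_ids.discard(identifier)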
The user may have the option of selecting audio output settings with which the doll converses. Audio output settings include the voice in which the doll speaks (as is described in more detail below), the accent with which the doll speaks (e.g. Scottish, South African, Swiss), or even the language of the doll. The audio output settings are for example selected by the user during the initialisation of the doll. The audio output settings are stored in a doll audio output settings part 256 of the doll's memory. The doll audio output settings 256 are used to ensure that the audio data retrieved from the server 200 has the correct audio output settings (e.g. is in the correct voice) for the doll.
When the doll next connects to the server 200 (in the illustrated example via a computing device 260 connected to the internet 262) the doll submits its list 107 of required data identifiers. The doll also provides its audio output settings 256.
The server 200 has audio data available for the different combinations of identifier and audio output setting. In the illustrated example, the data is shown organised in tables 250, 252, where one table 250 contains name audio data and a second table 252 contains other variable audio data (e.g. favourite animal). For each audio output setting (e.g. voice setting) audio data is stored for each possible identifier (e.g. name identifier). In the illustrated example, the audio output setting is specified to be 'A502', and one of the required identifiers is specified as 'Name04'. The audio data file 254 specified by these constraints is located at the server and provided to the doll.
When the doll next encounters the other doll with Name04, it receives a signal from the other doll that contains the 'Name04' identifier. The doll can then use the audio data file 254 to speak (in the correct voice) the other doll's name and thereby refer to the other doll by name in the conversation. If the doll encounters a new doll that happens to have Name04, then the doll can already address the new doll by name, as it already has the appropriate name audio data stored.
Additionally, or alternatively, the server can maintain a record of the doll's audio output settings, and ensure that the audio data provided to the doll is the correct audio data for the specified audio output settings.
Additionally, or alternatively, the server can maintain a record of each doll's variables. In this case, when a first doll is exposed to a second doll, the first doll receives a signal from the second doll that contains an identifier that specifies the second doll (e.g. a unique doll-specific identifier). The second doll's identifier is stored at the first doll and submitted by the first doll when it next connects to the server. The server uses the second doll's identifier to retrieve the record for the second doll, and from that record determines the second doll's variables. The server provides to the first doll the audio data for the second doll's variables, according to the first doll's audio output settings. The audio data may be referenced back to the second doll's identifier (e.g. by naming, for instance, 'Doll000022 Name audio'). In this case a set of audio data is stored for each other doll, and some redundancy may occur: if for example there are two other dolls that happen to have the same name, then the same audio data is stored twice under different identifiers. Alternatively, the server may provide, in addition to the audio data, a look-up table or the like for the second doll's variables with which the doll references the audio data (e.g. Doll000022_Name = Name04). In this case when the first doll encounters a third doll (with doll identifier Doll000033) that has the same name as the second doll (Name04), the first doll does not require downloading of the Name04 audio data again; however, upon receipt of the third doll's identification, the first doll cannot speak the third doll's name until it has obtained the look-up table linking the doll identification (Doll000033) to the audio data (Name04).
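The look-up-table variant can be pictured with a small server-side sketch in Python. All table layouts and names below are assumptions for illustration, not the actual server implementation:

# Table 250: name audio keyed by (voice setting, name identifier).
name_audio = {("A502", "Name04"): b"<audio bytes>"}
# Per-doll variable records maintained at the server.
doll_records = {"Doll000022": {"Name": "Name04"}}

def resolve_variables(requesting_doll_voice, other_doll_id):
    variables = doll_records[other_doll_id]
    name_id = variables["Name"]  # e.g. Doll000022_Name = Name04
    # Return the look-up entry plus the audio; a third doll with the same
    # name then needs only a new look-up entry, not the audio again.
    lookup = {other_doll_id + "_Name": name_id}
    audio = {name_id: name_audio[(requesting_doll_voice, name_id)]}
    return lookup, audio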
In conversation, a doll can respond to another doll depending on whether the audio data for a variable is available or not. If the audio data for a variable is not available, then the conversation may for example take a course in which the doll asks questions regarding the variable (e.g. 'What's your favourite dessert?'). If the audio data for a variable is available, then the conversation may for example take a course in which the doll refers to the variable in the conversation (e.g. 'Chocolate cake is your favourite dessert, right? I love baking cakes! Let's bake a chocolate cake!').
The process allows a doll to acquire the ability to refer to variables in conversation, and in addition ensures that the doll can do so correctly according to the doll's audio output settings. The process allows a large number of possible variables to which dolls can refer, so that the user has great freedom of choice, and dolls and their conversations can be tailored to a large degree. By taking audio output settings into account, a variable specific to another doll is tailored according to the doll's own settings. Taking audio output settings into account provides the benefit of allowing the user to influence the doll (e.g. by choice of voice). Storing the audio data at the server and retrieving such audio data as required provides flexibility, which in turn allows the conversations to be updated and adapted frequently. It also allows central storage of the audio data, with only the required data stored at the doll. This minimisation of storage requirements at the doll enables more portable dolls, which is a favourable feature in a toy.
In one example, name audio data is referred to by identifiers (or references) of the form 0x6nnn, where the nnn is an index to the particular name from a defined selection of names, for example 100 possible doll names. During the running of a conversation a doll may be required to speak a name by receiving a name reference 0x6nnn.
When a doll is required to speak a name, the conversation engine checks the name audio data cache to see if the appropriate name audio data is available. If the name audio data is available then the name is spoken. If the name audio data is unavailable then a record is made of the name reference 0x6nnn in a list of required names (the 'required data identifiers' list). The list of required names is stored in a permanent fashion even if the doll is subsequently switched off.
When the doll next connects to a website or server (via a computing device and the internet) it delivers its list of required names, and receives the name audio data for the required names. The received name audio files are added to the name audio data cache. In an example where the name audio data cache can contain a maximum number of files, e.g. 10 name audio files, the name audio data cache is made up of the doll's own name followed by the 9 most recently requested names. If the name audio data cache is already full upon receipt of new names, then the least recently used name audio files are discarded. The processor may for example determine, upon receipt of a name audio file, if the name audio file can be added to the name audio data cache without exceeding the maximum number of name audio files. If it is determined that the maximum number of name audio files in the name audio data cache would be exceeded, then name audio files are deleted from the name audio data cache until the new name audio file can be added without exceeding the maximum number of name audio files. For example, the least recently used name audio file in the name audio data cache is deleted.
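A minimal sketch of such a least-recently-used name cache, assuming a maximum of 10 entries with the doll's own name always retained (the structure names are illustrative, not the firmware's own):

from collections import OrderedDict

MAX_NAMES = 10
name_cache = OrderedDict()   # name reference 0x6nnn -> audio data, oldest first
required_names = set()       # the 'required data identifiers' list

def speak_name(ref):
    if ref in name_cache:
        name_cache.move_to_end(ref)   # mark as most recently used
        return name_cache[ref]
    required_names.add(ref)           # record the reference for the next server visit
    return None

def add_name(ref, audio, own_ref):
    name_cache[ref] = audio
    name_cache.move_to_end(ref)
    while len(name_cache) > MAX_NAMES:
        # Discard the least recently used name, but never the doll's own name.
        victim = next(r for r in name_cache if r != own_ref)
        del name_cache[victim]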
Authoring Tool - general, environment
The authoring tool is an application which can be used to create conversation themes for multiple dolls. The authoring tool is also described in International Patent Publication No. WO2010/007336 (which is hereby incorporated herein in its entirety by reference). In particular Figure 11 and the associated description of WO2010/007336 describe various aspects of the authoring tool. Briefly, the creation of conversations requires a significant amount of time due to the large number of potential branches that the conversation might follow. Figure 6a illustrates an example of a conversation tree. In order to make the process more efficient an authoring tool is provided to aid in this process. A client application runs on a computing device (personal computer, laptop, or the like) with the data stored on a server to allow either multiple users to work on the same theme, or a single user to work on the same theme from different locations. A web service is provided on the server which provides an interface between the database and the client application. The client application communicates with the server via the internet. The client application formats requests to the web service, and hence the database, using XML, and transmits the data using the SOAP protocol.
The authoring tool (also referred to as the 'scenario development tool') permits the creation, development, testing, management and uploading (publishing) of scenarios and associated ancillary data. A scenario is based around a conversation tree as shown in Figure 6a. A scenario relates to a theme; in Figure 6a for example the theme is 'Walking the Dog'. A scenario is composed of (themed) data or content relating to (or within) a particular theme. A multitude of scenarios for the same theme may be available. A scenario may be associated with a single theme, or with more than one theme.
A scenario may be structured, as illustrated schematically in Figure 6a, in which the oval blocks represent the scene introduction, the rectangular blocks represent the main body of the scenario, and the rounded rectangular blocks represent scene endings. In the conversation tree illustrated in Figure 6a, a scene introduction is at the beginning of a conversation, then a main body branch follows. At the end of a main body branch the conversation may be looped back to a new main body branch, or (for example based on a probability-weighted selector) progressed to a scene ending. A scenario may also be a scripted dialog, resembling a theatre or film script rather than a conversation tree as illustrated in Figure 6a.
Scenarios are designed based on a defined number of roles (roles are virtual dolls, like roles in a play are virtual actors). The roles are given nicknames to help the scenario author remember the role's identity while writing a scenario.
PERSONALITY FITTING
Dolls as well as roles have a 'personality' or 'character'. A personality value (also referred to as a 'character' value) is defined by setting eight personality traits with values between 0 and 15. The personality traits cover a spectrum of what most people consider to be 'important' personality traits and so provide a way of characterising the personality of each role.
A doll's personality value may be set by its user or owner. The personality value may be modified by the user, for example via a computing device. The user may be able to select a pre-defined personality value from a list.
The eight personality traits are stored in a 32 bit number; the resulting personality value is stored in the compiled scenario file (for a role) or in the doll data file (for a doll). The personality value of a doll is stored in a permanent fashion in the doll data file, even if the doll is subsequently switched off.
The personality value is not the same as a mood. A mood, unlike a personality value, may vary from conversation to conversation. Moods may for example be stored as attributes. Moods are however not stored in the doll data file.
Alternatively, one or multiple of the personality traits may be reserved for mood parameters. Setting and storing of the personality value is described in more detail below.
A doll's personality value can be used in conversations (in particular at the initiation of conversations) so that the doll is able to choose a role such that the role's personality value is closest to the doll's personality value. For example if the user has defined a doll as 'thoughtful', then the doll has a tendency to select roles that are 'thoughtful'.
Figure 6b is an example of four roles 270 "Mary", "Alice", "Evie", and "Liz". Each role has a personality value comprising three different personality traits 274 "sporty", "thoughtful", and "silly". Each personality trait has a value between 0 and 15. For example the role "Mary" is quite sporty, whereas the role "Alice" is quite thoughtful. A doll 272, in the illustrated example named "Molly", also has a personality value comprising the three different personality traits 274. In the illustrated example "Molly" is quite thoughtful. When Molly joins a conversation that is designed with the four illustrated roles, Molly's personality value has a closest match with the "Alice" role, hence Molly's first choice of role is the "Alice" role.
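The closest match can be computed as, for example, a minimal trait-by-trait difference. The following Python sketch uses illustrative trait values consistent with the description of Figure 6b (the exact figures are not given in the text):

roles = {                     # trait order: sporty, thoughtful, silly (0-15 each)
    "Mary":  (12, 3, 5),      # quite sporty
    "Alice": (2, 13, 4),      # quite thoughtful
    "Evie":  (6, 5, 11),
    "Liz":   (8, 7, 8),
}
molly = (3, 12, 5)            # the doll "Molly" is quite thoughtful

def closest_role(doll_traits, unassigned):
    # Choose the unassigned role whose traits minimise the total difference.
    def distance(name):
        return sum(abs(d - r) for d, r in zip(doll_traits, roles[name]))
    return min(unassigned, key=distance)

print(closest_role(molly, roles))   # -> "Alice"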
The personality value can also be used to influence the conversation flow. For example a doll might prefer to go to the library instead of playing tennis, depending on its level of sportiness. The use of the personality value (or other attributes) for structuring the flow of a conversation is described in more detail below.
A mood value that can be altered during the course of a conversation may be defined as an attribute that persists between conversations. The mood value may be used for conditional conversation flow, allowing selection of conversation branches depending on a current mood value. A mood value may be changed depending on an action or outcome or the course of a conversation.
Voices
Numerous voices are required for the dolls to have individual voices. This is advantageous as it allows the user to customise his or her doll. For example, a user that has two dolls can choose a deep voice for one doll, and a 'breathless' voice for the second doll. The choice of voice for a doll is made during the initialisation of a doll, for example when first connecting the doll to a website (via a computing device such as a PC). To accommodate different voices the audio files are available for each voice version, and the doll downloads the appropriate voice version of a scenario.
The dolls may be configured to enable them to say phrases or parts of phrases more or less loudly. This can be controlled for example by a doll attribute, such as a mood attribute. To achieve this, meta-data may be associated with the audio file to influence the volume or volume modulation when the doll speaks a phrase.
Alternatively, if the doll is not adapted to control volume and volume modulation, a phrase is recorded or synthesised in a variety of versions with different modulations and volumes. The doll can then select the correct phrase to be spoken in dependence on the situation, as parameterised by attributes and conditions.
Scenario features
Scenarios support the concept of a 'Golden Phrase'. A 'Golden Phrase' is a designated phrase that may be (but need not be) spoken within a scenario. In Figure 6a the 'Golden Phrase' of that scenario, indicated by italic bold font, is at the bottom left of the conversation tree. When a doll speaks the 'Golden Phrase' the doll earns a reward. To enable the reward, when a doll speaks the 'Golden Phrase' this fact is recorded by the doll. When the doll next connects to the website the doll's record is uploaded, and the appropriate reward is given (for example points for an online doll alias, usable to acquire accessories for the alias). Only the first doll to speak the 'Golden Phrase' during a conversation is entitled to the reward and records the event. The doll's record of having spoken a 'Golden Phrase' is stored in a permanent fashion, even if the doll is subsequently switched off, until such time as the record is uploaded.
By recording data relating to the conversations, such as the doll having spoken the 'Golden Phrase' or the number of conversations a doll has been involved in, this data can be registered when a doll connects to the website and used to compare and/or reward dolls. For example, a point score table (leader board) can be incorporated in the online environment as a part of an online game.
Scenario development parameters include:
* The maximum number of supported roles (described above)
* The maximum number of supported voices
* The maximum number of supported attributes (attributes are described in more detail below)
* The maximum number of choices per event (events and choices are described in more detail below)
When a scenario has been created it is given a unique scenario ID. The scenario ID includes a 32 bit number, a scenario name and optionally a scenario description (e.g. a paragraph of text describing the features of the scenario). The scenario description may for example be accessed by and displayed in the website.
Scenario operations
A scenario can be saved in a (.tmx) file and recovered later for further editing.
When a scenario is complete it may be compiled. Compilation, if successful, produces a (.bin) file. If audio recordings or synthesised audio data of the phrases used in the scenario are available then the (.bin) file contains audio data as well as scenario data. If the audio data is not available then the (.bin) file contains just the scenario data.
The (.bin) file can be used with dolls and/or with a simulator (described in more detail below). The simulator can use the (.bin) file with or without audio data files.
Dolls require the audio data files to be present. This arrangement allows the testing of scenarios in a pre-audio phase, in the absence of audio data.
Once a scenario has been compiled it can be tested with the simulator part of the authoring tool. Simulators represent dolls, and each simulator that is started represents a different doll. Simulators as well as dolls have a doll data file which contains the 'character' value (also referred to as the 'personality' value), as described above (as well as other data).
The conversation engine is used to run the scenario data in simulators as well as in dolls. The conversation engine is a program that runs in the simulator, and also in the actual dolls. It is responsible for processing the instructions contained in the scenario file that has been loaded. The conversation engine causes the appropriate doll to speak the appropriate audio phrase at the appropriate time.
The conversation engine also maintains a log of the number of times a simulator or doll has joined a conversation. A doll earns rewards in proportion to the number of times it joins a conversation. To enable the reward, when the doll next connects to the website, the doll's conversation log is uploaded and the appropriate reward is given (for example points for an online doll alias, usable to acquire accessories for the alias). In dolls the conversation log is stored in a permanent fashion, even if the doll is subsequently switched off, until such time as the log is uploaded.
When a simulator is started it requests to join the conversation in the scenario being tested. The simulator takes the unassigned role that most closely matches its 'character'. The first simulator started becomes the 'Controller' and has the first choice of role. Subsequent simulators become 'Clients' (also referred to as 'Slaves'). The maximum number of simulators that can join the conversation in the scenario is equal to the number of roles defined for that scenario. During the running of a conversation simulators may be 'exited' (simulating the removal of a doll from the conversation) or started up (simulating the arrival of a new doll).
Once sufficient simulators have been started the scenario conversation can be started. The Controller' simulator then runs the scenario data and simulates the logic of the scenario as designed by the authoring tool. A log of important events is maintained by each simulator outlining what is said, who says it, etc. If audio data is available then the audio data file is played so that the conversation can be followed aurally. Log files of each simulator are stored for subsequent examination, if required.
Following a test run with the simulators the scenario is modified and/or re-tested.
The ability to test a scenario in a pre-audio phase prevents committing resources to record or synthesise the audio data until the scenario is satisfactory. When the audio data is available testing with the simulator is repeated with the audio output. When a scenario has passed simulated audio testing the scenario is downloaded to dolls and the scenario is tested with the dolls. When a scenario has passed the testing on dolls, it is ready for submission (for example uploading to a website for distribution).
Tools to support various aspects of scenario development are included in the authoring tool. For example compilation of a scenario produces a 'Phrases.txt' file which contains an indexed list of all the phrases used in the scenario. The 'Phrases.txt' file can then be used as a cue-sheet for recording the audio data.
The authoring tool allows scenario management, in particular maintaining the local environment to support the development, testing and uploading of scenarios, and the importing of audio files as appropriate. The local environment includes the following folders and files:
* Base Folder (with a default name). The following folders are sub-folders of this Base Folder.
* Names - This contains the file 'names.txt' which contains an indexed list of the text of each supported name together with its reference value. The index is of the form A0000n, which is a reference to the audio file containing the name. This folder also contains sub-folders named Voice1, Voice2 etc., which in turn contain the audio data for each supported name recorded or synthesised in the corresponding voice. The name audio files are named A0000n.wav and are referenced in the 'names.txt' file.
* Themes - This contains sub-folders containing scenario data. Each sub-folder is named with the scenario ID of the scenario it contains. Each scenario sub-folder may contain one, some, or all of the following types of file:
o 'Phrases.txt' (containing an indexed list of all the phrases used in the scenario; produced by the tool when saving the scenario and useful for testing and also as a cue sheet for recording)
o 'Scenario_Name.tmx' (a saved version of the scenario suitable for further editing)
o 'Scenario_Name.bin' (containing the compiled scenario data in a pre-audio format, with no audio data, suitable for testing)
o 'Scenario_Name_v.bin' (containing a temporary compiled version of the scenario in a pre-audio format suitable for the appending of compressed audio data files)
o 'Scenario_Name_Vn.bin' (containing the compiled scenario data plus audio data files for voice n; this is the final format that is eventually downloaded to the dolls).
Each scenario sub-folder also contains sub-folders named Voice1, Voice2 etc., which contain the recorded or synthesised phrase audio data in the corresponding voice. These folders are populated with phrase audio data files at such time as phrase audio data files become available.
* Dolls - This contains sub-folders named Doll_ID (e.g. a 32 bit integer in hexadecimal such as 0000001) which in turn contain the doll data file called 'MyData.txt', which contains:
o a reference to the doll's name audio file
o the doll's 'character' value, as described above
o the doll's voice, e.g. Voice1
This data is primarily used by the simulator. Actual dolls contain their own doll data file.
* Voices - This contains sub-folders Voice1, Voice2 etc., which in turn contain the following files:
o 'Description.txt' (a text description of the voice)
o 'Sample.wav' (a sample recorded or synthesised in the corresponding voice)
* Stock Phrases - This contains the file 'Phrases.txt', which contains an indexed list of the stock phrases. Stock phrases are a set of 5 phrases which are available to the conversation engine independent of any loaded scenario(s). Stock phrases can be used for such things as a "battery low alert" or other error or status conditions. This folder also contains sub-folders Voice1, Voice2 etc., which contain the recorded or synthesised stock phrase audio files in the corresponding voice.
An example of the content of the theme data file (also referred to as a scenario data file) is presented in Appendix A. The authoring tool also enables scenario uploading, wherein scenario data is uploaded to the website. The scenario data that is uploaded includes:
* a scenario ID (32 bit number)
* a scenario name (text)
* a scenario description (text)
* a theme and topic category under which the scenario should appear on the website. If the theme and topic category do not exist on the website then they are created.
* a flag indicating if the scenario is in test mode
* Scenario_Name_Vn.bin files for each supported voice (Vn indicates Voice1, Voice2 etc.). Each (.bin) file contains the scenario data file to which the compressed audio data of all the scenario's phrases, recorded in the corresponding voice, is appended.
In the conversation engine phrase audio data is referenced in the form 0x5nnn, where the nnn is an index to the actual phrase; the following referencing convention is used:
* reference 0x5000 is to a null phrase
* references 0x5001 to 0x5005 are to stock phrases, as described above
* references 0x5006 onwards are to the scenario phrase audio data. These references are sequential with no numbers missing.
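A sketch of decoding such a reference (illustrative Python, not the engine's actual code):

def decode_phrase_ref(ref):
    # References have the form 0x5nnn, where nnn indexes the phrase.
    index = ref & 0x0FFF
    if index == 0:
        return ("null", None)           # 0x5000: the null phrase
    if index <= 5:
        return ("stock", index)         # 0x5001-0x5005: stock phrases
    return ("scenario", index - 6)      # 0x5006 onwards: scenario phrases,
                                        # numbered sequentially with no gaps

print(decode_phrase_ref(0x5006))        # -> ('scenario', 0), the first scenario phrase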
Name files for audio data of other dolls' names may be required within a scenario.
Name files (with the audio recording of a name) are not included with the scenario data (as the scenario is operable with an arbitrary set of names).
Instead, the name files are accessed by reference to data that is not part of the scenario file. Further detail relating to the way in which other dolls' names are handled has been discussed above.
Attributes assist in sculpting the scenario and the resulting conversation.
Attributes can be defined at doll level, scenario level or theme level, or otherwise.
For example, a doll attribute may be the colour of the doll's dress; a scenario attribute may be the colour of sandals the doll intends to buy; and a theme attribute may be the doll's favourite retail brand. Dolls can refer to attributes in a conversation. For example, if dolls are playing a game, then they can refer to each other's attributes ("your turn, green!"). Attributes can also be used to direct the flow of the conversation. In the game-playing example, a statement can depend on what occurred in a foregoing round ("oh, bad luck, poor you!").
Attributes can be set in the course of a conversation; for example a mood attribute of a winning doll can be set to 'happy', and this in turn may be used to determine the course of the conversation. Attributes may be defined for the sole purpose of controlling the conversation flow, or they may be defined for references within the conversation, or a combination of both.
The ability to perform arithmetic operations (addition, subtraction, multiplication, division) allows sophisticated control of conversation flow. For example, simple addition operations enable the conversation flow to cycle through all dolls that are present, with each doll going through various loops in dependence on the counter. The ability to perform arithmetic operations is also useful to implement scripts such as playing board games, where very often a position is incremented in dependence on a random dice roll.
Attributes, combined with the ability to perform arithmetic operations, allow sophisticated conditional conversation control. Conditional testing for example sets a condition value on the basis of an attribute, such as if the doll's dress is a certain colour. A condition value can be used to determine a conversation branch to pursue. This allows the conversation to flow with more structure and control than if a branch is selected randomly.
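By way of illustration, the following Python sketch (with assumed attribute names, not the tool's own) shows the kind of arithmetic described: cycling the turn through the present dolls with a modulo addition, and advancing a board position by a random dice roll:

import random

attributes = {"theme.turn": 0, "me.position": 0}
doll_count = 4   # number of dolls present

def next_turn():
    # Add 1 modulo the number of dolls, so the turn cycles through everyone.
    attributes["theme.turn"] = (attributes["theme.turn"] + 1) % doll_count

def roll_and_move():
    dice = random.randint(1, 6)            # random dice roll
    attributes["me.position"] += dice      # increment the position accordingly
    return dice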
The authoring tool enables subroutines wherein the conversation can return to a main routine instead of just branching to a different routine. Different subroutines can be called in a controlled manner. In Figure 6a for example, at the end of a main body branch and depending on a weighted random factor, the conversation may either be looped back to another main body branch, or progressed to a scene ending.
Figure 6c shows examples of conditional conversation control (also referred to as conditional branching). Conditions that are tested are indicated by hexagonal shapes; rectangular shapes indicate phrases that a doll can express. In one example of a condition 280 the next phrase to be expressed is chosen in dependence on a personality trait of the doll speaking the next phrase (or the personality trait of the role). In another example of a condition 282 the next phrase to be expressed is chosen in dependence on a random input. In the illustrated example, the selection is biased toward the 'No' branch with 80% probability of selection. In another example of a condition 284 the next phrase to be expressed is chosen in dependence on a scenario attribute, in the illustrated example an attribute called 'weather' which can have the values 'rainy', 'cloudy', or 'sunny'. The scenario attribute 'weather' may for example have been set in a previous part of the conversation, where a random selection occurs between a group of phrases that express that the weather is either rainy, sunny, or cloudy.
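The three condition types of Figure 6c can be mimicked in a few lines of Python; the threshold, the phrase labels and the weather phrases below are assumptions for illustration:

import random

def condition_280(sportiness):
    # Branch on a personality trait of the speaking role (threshold assumed).
    return "sporty phrase" if sportiness >= 8 else "quiet phrase"

def condition_282():
    # Random input biased toward the 'No' branch with 80% probability.
    return random.choices(["No", "Yes"], weights=[80, 20])[0]

def condition_284(weather):
    # Branch on the scenario attribute 'weather'.
    return {"rainy": "rainy phrase", "cloudy": "cloudy phrase",
            "sunny": "sunny phrase"}[weather]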
The authoring tool provides checking to ensure that inputs (by importation or via forms) are syntactically correct.
The operation of a doll conversation is summarised as follows: 1. The header is read and held in memory = 16 + n*6 bytes, where n is the number of roles in this theme.
2. Processing the conversation consists of the following operations: 2.1. A starting role is chosen and the first context entry for that role is read into memory = 12 bytes.
2.2. The information in this context entry is processed.
a) The next role is chosen using the specified transition method.
b) Attributes are set using the set_attribute_block and the set method described in more detail below.
c) The condition_block is processed using the condition method described in more detail below.
d) A statement is chosen from the statement_choice_block using the say method described in more detail below.
e) A branch point to the next context entry is chosen from the branch_choice_block using the branch method described in more detail below.
f) The current role (doll) is told to say the thus chosen statement (phrases).
g) The next context entry for the next role is then read into memory and processed depending on the statement mode (timing data).
h) This repeats until the conversation ends.
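Steps a) to h) amount to a simple interpreter loop over context entries. The following Python sketch is a minimal illustration of that loop; the entry fields, the label-based branching and the example entries are assumptions, not the actual conversation engine:

import random

def run_conversation(entries, start_label):
    label = start_label
    while label != "End":
        entry = entries[label]                   # read the context entry
        role = entry["role"]                     # a) the next role (transition)
        statement = random.choice(entry["say"])  # d) choose a statement
        print(role + ":", statement)             # f) the chosen doll speaks
        weights, labels = zip(*entry["branch"])  # e) weighted branch choice
        label = random.choices(labels, weights=weights)[0]
    # h) the loop ends when the 'End' branch is chosen

entries = {
    "start": {"role": "Mary", "say": ["Who wants to walk the dog?"],
              "branch": [(3, "reply"), (1, "End")]},
    "reply": {"role": "Alice", "say": ["Me!", "Let's go!"],
              "branch": [(1, "End")]},
}
run_conversation(entries, "start")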
AUTHORING TOOL - further details and examples
The authoring tool (also referred to as the 'theme development tool') is now described in more detail with reference to an exemplary embodiment.
When the authoring tool is first executed the Theme Development graphical user interface (window) 300 shown in Figure 7 is displayed. The theme data directory 312 indicates the file directory in which the authoring tool operates; it can be selected as required. If a new empty folder is selected as the theme data directory the data structure defined above is created.
The Theme Development window 300 provides menu items "New" 301, "Open" 302, "Append" 303, "Import" 304, "Publish" 305, "Close" 306, "Voices" 307, "Names" 308 and "Stock Phrases" 309. Each of these menu items is now described in further detail.
New 301 - This allows a new scenario to be developed from scratch. Upon selection the 'Theme' window 400 shown in Figure 8 is displayed. This window allows the full features of a multi-doll scenario to be developed. Once developed the scenario may be saved, compiled and/or tested. Audio data may be synthesised, or recorded audio files may be imported, and the scenario may be compiled with the audio data for uploading to the website. The functionality of the 'Theme' window 400 is described in more detail below.
Open 302 - This allows an existing (e.g. previously saved) scenario to be opened for further development, editing etc. Upon selection the 'Theme' window 400 opens, with the fields populated as shown in Figure 9, or a window similar to it (the exact appearance depends on the particular scenario). This window shows the data defining the scenario:
* Theme 401 - The name of the theme to which this scenario belongs (in the illustrated example 'Shopping').
* Topic 402 - The name of the topic to which this scenario belongs (in the illustrated example 'Mall').
* Scenario ID 403 - The unique identification number of this scenario (in the illustrated example '1'). This is a locally defined scenario ID and only needs to be unique in the local environment. When uploaded to the website the scenario ID is changed to match the globally unique ID assigned by the website.
* Scenario Name 404 - The name of the scenario (in the illustrated example 'Mall Scenario One').
* Scenario Description 405 - A paragraph of text describing the main features of the scenario for inclusion in the website.
* Theme Attributes 406 - The list of theme attribute names. This form allows the addition or removal of existing theme attributes.
* Phrases 410 - The list of phrases the theme may use, which are indexed in the order shown in this list. This form allows the addition, removal or editing of phrases.
* Golden Phrase 415 - specifies one of the phrases as the 'Golden Phrase'.
* Roles 429 -The list of roles supported by this theme. This shows a role nickname (e.g. "Mary") followed by the role's personality traits (e.g. 0,0,0,0,0,0,0,0) for each supported role. This form allows the addition, removal or editing of roles.
* Role Attributes 430 -The list of role attribute names. This form allows the addition or removal of existing role attributes.
* Edit Context for Role 423 -This allows the context entries for each role to be edited. The context entries are what define the flow of the theme conversation.
The functions of the items in the Theme' window 400 are described in more detail below.
Append 303 - This allows an existing sub-scenario to be appended to the end of the current scenario. This is useful for re-using sub-scenarios in new settings.
Import 304 - This allows the creation of a theme from a simple play-like script. A play-like script has some or all of the following formatted text elements:
* Theme: theme_name (optional)
* Topic: topic_name (optional)
* Scenario: scenario_name (optional)
* Description: description_text (optional)
* Role: allows the definition of role personality traits (optional, multiple entries possible); the format is: role_name,personality_value.
* Phrase: phrase_text (optional, multiple entries possible); this allows for the definition of phrases in a pre-defined order.
* Acts: allows simple conversation branching (optional). This entry allows the definition of acts with alternative scenes so as to control the flow of the scenario. For each act a cluster of possible scenes is defined, and in the conversation one of the scenes is chosen at random. Sequential acts can be listed. If the 'Acts' statement is omitted then each scene runs in the written order.
* Scene: scene_name (optional) allows the scripting of scene subunits within the scenario.
* Character: statement_text - this allows entry of a statement to be spoken by a character. The full syntax is Character: {modifiers} statement_text alternative_statement_text; various options are described in more detail with reference to the scripting language.
The following is an example of using the scripting language in a simple play-like script:
Theme: Shopping
Topic: Mall
Scenario: Mall Scenario four
Acts: (Mall1,Mall2)
Mary:
Scene: Mall1
Mary: Hi - [next.Name] Thanks for coming!
Alice: Are you kidding [prev.Name] - I'd never miss a trip to the mall with you guys!
Liz: Or a chance to check out the sales!
Evie: I've got all my spending money with me!
Liz: This is gonna be super fun!
Alice: So what do we hit first - The department stores - Shoe sale or the cosmetic stand?
Mary: I don't mind but I can't leave without buying a new glitter gloss.
Evie: And I have to check out the new ribbon tie sandals at the shoe boutique!
Mary: They would totally go with your boot cut jeans.
Evie: Totally.
Liz: Oh and I need to check if that pussy bow dress comes in pink yet - They only ever have blah blue!
Mary: Are you sure it's the dress you wanna check out?
Alice: And not the boy working on the coffee counter outside?
Evie: Giggles - Totally!
Mary: Giggles1.
Alice: Giggles2.
Liz: No way - That's SO not happening - It's all about the dress!
Liz: This is gonna be super fun!
Mary: So what are we waiting for?
Alice: Let's roll!
Evie: Totally!
Scene: Mall2
Mary: Hey!
Liz: Hi!
Alice: I finally got here!
Evie: Yeah, I thought I was gonna be late.
Liz: It looks like we've already shopped. Wanna do something else?
Mary: I know, you could stay over at mine!
Liz: A sleep over, I'm there!
Alice: I'll bring the sodas.
Evie: I'll bring the popcorn.
Alice: Sounds like we have everything covered!
Evie: Now let's go get ready!
Alice: Yes, see you at the sleep over!
Liz: Bye bye for now.
Mary: Oh and don't forget your PJ's!
Liz: X-O-X-O.
When imported, the above script creates a scenario named "Mall Scenario four" within Theme "Shopping", topic "Mall", containing 4 roles with nicknames Mary, Alice, Liz and Evie. The scenes are organised so that when run a random choice is made between running Mall1 or Mall2. Importation of the above script causes the window shown in Figure 9 to be displayed.
Importation of a script is a simple way to begin a new scenario. The play-like script format is straightforward and intuitive and once imported provides a complete but simple scenario. This format provides a basis for simplified scenario authoring, allowing users to create their own scenarios. A simplified authoring tool is described in more detail in the Simplified Authoring Tool section below. This type of scenario is simple in the sense that it is completely deterministic as regards which doll says what and when it speaks. For more sophisticated scenarios an imported scenario can be developed further with the authoring tool to add any of the supported features such as attributes, random alternative statements, numeric logic, conditional branching and the like. Alternatively, the use of keywords in the script allows the definition of sophisticated scenarios within the script; this is described in more detail in the Scripting Language section below.
Publish 305 - This allows interaction with the website to allow uploading of scenarios, voices and names. Upon selection of the 'Publish' menu item 305 the 'Publish' form 800 is displayed, as shown in Figure 10. The 'Publish' form 800 is described in more detail below.
Close 306 - This allows the currently active scenario to be closed.
Voices 307 - This allows operations relating to the definition of a synthetic voice for each supported voice, in order to be able to synthesise phrases. Upon selection of the 'Voices' menu item 307 the 'Voice Maintenance' form 820 is displayed, as shown in Figure 11. The 'Voice Maintenance' form 820 is described in more detail below.
Names 308 - This allows operations relating to the definition of supported names and corresponding name audio files. Upon selection of the 'Names' menu item 308 the 'Names Maintenance' form 810 is displayed, as shown in Figure 12. The 'Names Maintenance' form 810 is described in more detail below.
Stock Phrases 309 - This allows definition and editing of stock phrases. Upon selection of the 'Stock Phrases' menu item 309 the 'Stock Phrase Maintenance' form 830 is displayed, as shown in Figure 13. The 'Stock Phrase Maintenance' form 830 is described in more detail below.
Scripting language
The scripting language and the commands supported by the authoring tool are now described in more detail.
The scripting language allows the complete specification of a scenario in a text form which can be imported into the authoring tool for further processing. The syntax of the imported script is checked during the import process. Importation is terminated on detection of a syntax error. Textual information is provided to detail the type and location of the error.
The scripting language provides an intuitive way to construct simple play-like scenarios, but it can also include more advanced features for the development of more complicated scenarios.
The scripting language consists of formatted statements of the following form:
* Keyword: text - This is an active script element
* // comment_text - This allows the inclusion of explanatory comments within the script. These are ignored during the import process.
Because they are used by the script syntax, the characters & : ( ) [ ] should be avoided for uses other than giving the prescribed script syntax, in particular in statement text.
The following keywords and scripting commands are available (as also described above with reference to the 'Import' 304 function):
* Theme: theme_name (optional, no more than one entry per scenario)
* Topic: topic_name (optional, no more than one entry per scenario)
* Scenario: scenario_name (optional, no more than one entry per scenario)
* Description: description_text
* ID: scenario_ID, integer value (optional, no more than one entry per scenario)
* Role: role_name,personality_traits - this allows the definition of role personality traits (optional, multiple entries possible). As described above the personality value is composed of 8 personality traits, each having an integer value from 0 to 15; personality traits are input in the form trait1,trait2,trait3,trait4,trait5,trait6,trait7,trait8.
* Acts: act_scene_group, act_scene_group, ... - this command allows simple conversation branching (optional). A list of act specifications is separated by commas. An act specification is a list, enclosed in brackets, of scene names separated by commas. The scene names must be those introduced by the 'Scene:' keyword described below. This entry allows the grouping of scenes into acts so as to control the flow of the scenario. If the 'Acts' statement is omitted then the scenes run in the sequence in which the scenes are written. For example, a script has scenes "Start1", "Start2", "Start3", "Middle1", "Middle2", "Middle3", "End1", "End2", "End3". The statement:
Acts:(Start1,Start2,Start3),(Middle1,Middle2,Middle3),(End1,End2,End3)
organises the script such that the resulting conversation makes a random choice of "Start1" or "Start2" or "Start3", followed by a random choice of "Middle1" or "Middle2" or "Middle3", followed by a random choice of "End1" or "End2" or "End3". 27 (i.e. 3x3x3) different conversations are possible.
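The effect of the Acts statement can be simulated in a couple of lines of Python (illustrative only):

import random

acts = [("Start1", "Start2", "Start3"),
        ("Middle1", "Middle2", "Middle3"),
        ("End1", "End2", "End3")]

# One scene is chosen at random per act: 3 x 3 x 3 = 27 possible runs.
conversation = [random.choice(group) for group in acts]
print(conversation)   # e.g. ['Start2', 'Middle1', 'End3']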
* Scene: scene_name (optional, multiple entries possible) allows the scripting of scene subunits within the scenario. Different sections of the scenario are appropriately labelled. If it is omitted the whole scenario is labelled "start, start:1, start:2 etc".
In the simplest form, as seen in the example above, the syntax 'Character: statement_text' allows entry of a statement to be spoken by a character. Phrases that make up the conversation content follow the syntax: Role Specifier: {modifiers} spoken_text custom_spoken_text. This defines a context element, as used in the authoring tool once the script is imported. Generally multiple context elements together specify the complete behaviour of the scenario. When imported, context elements are generated, and each newly generated context element is labelled. The first context element after a 'Scene: scene_name' statement is labelled with scene_name, the second context element with scene_name:1, the following context element with scene_name:2 and so on.
Role Specifier: introduces the next active role. It can take the following values:
* Me: - This selects the current active role as the next active role
* NotMe: - This randomly selects any present role except the current active role as the next active role
* Any: - This randomly selects any present role as the next active role
* Prev: - This selects the previous active role as the next active role
* Next: - This selects the next role to the current active role, in the order in which they joined the conversation, as the next active role. It selects the controller role after the last present role.
* Transition: - This selects the role indexed by the numeric value stored in an attribute called 'theme.Transition' as the next active role
* Role_name: - This selects the role defined by role_name as the next active role. If role_name has not already been defined by a 'Role:' statement then it is implicitly defined with traits = 0,0,0,0,0,0,0,0
Modifiers (optional): contains advanced features used to modify the default behaviour of the scenario and allows for the complete control of the script logic, for example by setting the values of attributes, testing attribute values, changing the default behaviour of saying statements, branching and transitioning. This is described in more detail below.
Spoken_text and custom_spoken_text: defines the statements to be spoken.
The spoken_text part may be omitted if nothing needs to be said in the context element, but if it is omitted then any custom_spoken_text must also be omitted.
The custom_spoken_text part is optional and takes the form: (role_name) spoken_text. The custom_spoken_text part allows the specification of customised statements for the specified role only. The role should be defined before use in any custom_spoken_text. custom_spoken_text may be repeated in a context element for as many roles as required.
The spoken_text part can consist of a choice of statements and/or other statement options as follows:
* Random choice of statement with '|': Statement1 | Statement2 | Statement3 etc. This defines a list of statements with equal weighting.
* Random weighted choice of statement with '|': |n1 Statement1 |n2 Statement2 |n3 Statement3 etc. This defines a list of statements where Statement1 has weighting n1, Statement2 has weighting n2 etc. (where the n1, n2 etc. are the integer values of the weightings)
* Phrase concatenation with &: Phrase1&Phrase2&Phrase3 etc. - each statement itself may consist of concatenated phrases
* Reference to attribute: Phrase1 [attribute1] Phrase2 [attribute2] etc. - this allows a statement to contain phrases and references to attributes so that a doll can say variable things such as the name of the doll that is taking the part of a role, e.g. [me.Name], [prev.Name] or [next.Name].
As seen in the simple example script above, play-like scenarios are constructed without any modifiers. The behaviour of some of the features of the example script above is:
* Acts: (Mall1,Mall2) - organises act1 to consist of either Mall1 or Mall2; when run it makes a random choice of running Mall1 or Mall2
* Me: - in the fifth line of the example script an empty context element is used to ensure that Mary starts the scenario irrespective of who the controller is.
* Scene: Mall1 - defines scene Mall1
* Mary: Hi - [next.Name] Thanks for coming! - the context element is labelled as Mall1; the following context elements are labelled Mall1:1, Mall1:2 etc. until the next 'Scene:' statement.
When run: 1) the specified role says the associated statement, or it chooses and says one of the statements if a choice is specified with the '|' character; 2) control then passes to the next context element; and 3) the scenario ends after the last context element.
Modifiers
Modifiers (optional) are used to change the default behaviour of the context elements. The syntax for using modifiers is: Role_specifier: {modifiers} spoken_text custom_spoken_text. Modifiers can consist of one or more of the following elements, separated by commas where there is more than one modifier. The types of modifier are:
* Set modifier - this is used to set values to attributes.
* Condition modifier - this is used to test the value of an attribute and to store a condition value which can be used to modify the subsequent behaviour of the scenario.
* Say modifier - this is used to modify the default speaking behaviour of the scenario.
* Branch modifier - this is used to modify the flow of the scenario.
* Transition modifier - this is used to modify the selection of the next speaker.
The different types of modifier are now described in more detail.
Set modifier - this is used to set values to attributes. It consists of the following text: Set<SetMode,SetList>.
SetMode is one of the following keywords:
* Random - the selection of values is a random choice.
* Unique - the selection of values is a random choice with ensured uniqueness.
* Condition - the selection of values is based on the previous condition value.
SetList is a list of one or more SetStatements separated by commas. A SetStatement consists of the following text:
'Attribute' 'Assignment' ['ValueChoice'].
* 'Attribute' is an attribute specification
* 'Assignment' is one of the following:
o = the attribute is set to the choice of value
o ? the attribute is set to the choice of value if it is not already set
o + the attribute has the choice of value added to its current value
o - the attribute has the choice of value subtracted from its current value
o * the attribute has the choice of value multiplied by its current value
o / the attribute has its current value divided by the choice of value
o & the attribute has the choice of value added to its current value modulo the number of active dolls.
* 'ValueChoice' consists of a list of one or more ValueSpecifiers separated by commas, where a ValueSpecifier consists of (weight,Value) where
o weight is an integer used in making random choices and
o Value can be an attribute, a 'phrase' or an integer value.
The following are some examples of Set modifiers:
Example 1:
{Set<Random, theme.dice=[(1,1),(1,2),(1,3),(1,4),(1,5),(1,6)], theme.dicephrase=[(1,'0')], theme.dicephrase+[(1,theme.dice)]>} This causes first a random choice of a value between 1 and 6; the chosen value is then used to specify an appropriate phrase for the chosen value.
Example 2:
{Set<Condition, me.position=[(1,me.position),(1,theme.snake1bottom),(1,theme.snake2bottom),(1,theme.snake3bottom),(1,theme.snake4bottom),(1,theme.snake5bottom)]>} This causes the me.position attribute to be set to one of the listed values, depending on the current condition value (set by a foregoing condition modifier).
For example, if the condition value happens to be 4, then me.position is set to theme.snake4bottom. If the condition value happens to be 0, then me.position is set to me.position, that is, it remains the same.
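The Set modifier semantics can be sketched as follows in Python; the function and the Condition-mode indexing are an illustrative reading of the two examples above, not the tool's actual code:

import random

def apply_set(attrs, name, op, choices, mode="Random", cond=0, doll_count=4):
    weights = [w for (w, v) in choices]
    values = [v for (w, v) in choices]
    if mode == "Condition":
        value = values[cond]     # Example 2: condition value 4 selects the
                                 # fifth entry, theme.snake4bottom
    else:
        value = random.choices(values, weights=weights)[0]
    if op == "=":
        attrs[name] = value
    elif op == "?":
        attrs.setdefault(name, value)   # set only if not already set
    elif op == "+":
        attrs[name] += value
    elif op == "-":
        attrs[name] -= value
    elif op == "*":
        attrs[name] *= value
    elif op == "/":
        attrs[name] //= value           # integer division assumed
    elif op == "&":
        attrs[name] = (attrs[name] + value) % doll_count

attrs = {}
apply_set(attrs, "theme.dice", "=", [(1, 1), (1, 2), (1, 3), (1, 4), (1, 5), (1, 6)])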
Condition modifier - this is used to test the value of an attribute and to store a condition value which can be used to modify the subsequent behaviour of the scenario. It consists of the following text: If<'Attribute' 'Operator' ['Value list']>, where
* 'Attribute' is an attribute specification
* 'Value list' is a list of at least one value separated by commas; a value can be a 'phrase', an attribute or an integer value
* 'Operator' is one of the following comparison operators:
o = sets the condition value to the index of the first element in the value list which equals the specified attribute (elements are counted starting from 1)
o < sets the condition value to the index of the first element in the value list which is greater than or equal to the specified attribute (elements are counted starting from 1)
o > sets the condition value to the index of the first element in the value list which is less than or equal to the specified attribute (elements are counted starting from 1)
o # sets the condition value to the index of the first element in the value list which is not equal to the specified attribute (elements are counted starting from 1)
If no elements match the condition then the condition value is set to zero.
The following is an example of a condition modifier: {If<me.position=[theme.snake1top,theme.snake2top,theme.snake3top,theme.snake4top,theme.snake5top]>} This causes the condition value to be set depending on what the me.position attribute happens to be; for example, if me.position is currently theme.snake4top, then the condition value is set to 4.
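A Python sketch of how the condition value may be computed from the operator and value list (illustrative only):

def condition_value(attribute, operator, value_list):
    tests = {
        "=": lambda a, v: v == a,
        "<": lambda a, v: v >= a,   # first element greater than or equal
        ">": lambda a, v: v <= a,   # first element less than or equal
        "#": lambda a, v: v != a,
    }
    for index, value in enumerate(value_list, start=1):
        if tests[operator](attribute, value):
            return index            # elements are counted starting from 1
    return 0                        # no element matches the condition

# If me.position is 'snake4top', the condition value becomes 4:
print(condition_value("snake4top", "=",
      ["snake1top", "snake2top", "snake3top", "snake4top", "snake5top"]))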
Say modifier - this is used to modify the default speaking behaviour of the scenario. It consists of the following text: Say<'SayMode','Timing'> where:
* 'SayMode' is one of the following keywords:
o Random - the selection of statements from the spoken text is a random choice.
o Condition - the selection of statements from the spoken text is based on the previous condition value.
* 'Timing' specifies the timing of the statement choice as follows:
o 0 - the chosen statement is spoken when the current statement has completed.
o N - the chosen statement starts n/100 of a second after the current statement has started.
o -N - the chosen statement starts n/100 of a second before the current statement completes.
The default Say behaviour, if no Say modifier is present, is equivalent to Say<Random,0>.
Branch modifier - this is used to modify the flow of the scenario. It consists of the following text: 'BranchSpecifier'<'BranchMode',['BranchList']> where
* 'BranchSpecifier' is one of the following keywords:
o Goto - Specifies a branch to a label chosen from the BranchList, based on the BranchMode.
o GoSub - Specifies a branch to a subroutine chosen from the BranchList, based on the BranchMode.
* 'BranchMode' is one of the following keywords:
o Random - the selection of labels/subroutines is a random choice.
o SayChoice - the selection of labels/subroutines is based on the choice made in selecting the statement to say.
o Condition - the selection of labels/subroutines is based on the existing condition value.
* 'BranchList' is a list of one or more LabelChoices separated by commas. A LabelChoice consists of the following text: ('weight','label') where
o 'weight' is an integer used in making random choices and
o 'label' is a valid label of a context element or one of the following keywords:
- Return - specifies a return from a subroutine.
- End - specifies the termination of the scenario.
For a Goto branch the BranchList is as follows: [LabelChoice0,LabelChoice1,LabelChoice2,etc.].
For a GoSub branch the BranchList is as follows: [Return Label,LabelChoice0,LabelChoice1,LabelChoice2,etc.].
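A sketch of how Goto and GoSub branching with a return stack might operate (illustrative Python; the stack and the handling of the Return and End keywords are assumptions consistent with the description above):

import random

call_stack = []   # return labels pushed by GoSub branches

def choose_label(label_choices):
    weights = [w for (w, label) in label_choices]
    labels = [label for (w, label) in label_choices]
    return random.choices(labels, weights=weights)[0]

def branch(specifier, branch_list):
    if specifier == "GoSub":
        return_label, *choices = branch_list   # first entry is the return label
        call_stack.append(return_label)
        return choose_label(choices)
    label = choose_label(branch_list)          # Goto
    if label == "Return":
        return call_stack.pop()                # return from the subroutine
    return label                               # a context label, or 'End'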
Transition modifier - this is used to modify the selection of the next speaker. It consists of one of the following keywords:
* Me - selects the current active role as the next active role
* NotMe - randomly selects any present role but the current active role as the next active role
* Any - randomly selects any present role as the next active role
* Prev - selects the previous active role as the next active role
* Next - selects the next role to the current active role, in the order in which they joined the conversation, as the next active role. The controller role is selected after the last present role.
* Transition - selects the role indexed by the numeric value stored in the attribute called 'theme.Transition' as the next active role.
* Role_Name - selects the role defined by Role_Name as the next active role.
The default transition behaviour is to use the Role_specifier of the following context element. However, if a branch modifier has been used then the flow does not necessarily continue to the following context element, so it is useful in this circumstance to be able to explicitly set the transition using a transition modifier.
When used together the modifiers must be enclosed in {} and separated by commas.
The following is an example of a Set modifier and a condition modifier used together: {Set<Random,theme.dollcount+[(1,1)]>, If<theme.dollcount>[dolls.Count]>, GoTo<Condition,[(1,setup:4),(1,main)]>} This causes the dollcount attribute to be increased by one, the condition value to be set in dependence on the dollcount attribute, and control then to proceed to a context entry (setup:4 or main) in dependence on the condition value.
An example of a script with modifiers and custom_spoken_text that produces a scenario of playing snakes and ladders is presented in Appendix B. When imported this script produces a full snakes and ladders playing scenario for up to 6 roles.
Theme development
The development features provided in the authoring tool are now described in more detail.
Defining scenario data: the Theme' form 400 is shown in Figures 8 (blank version of form) and 9 (form populated with data). The Theme' form 400 is used to define, edit and/or remove scenario data.
Defining the theme: to enter a theme the appropriate text is entered in the Theme' 401 text field.
Defining the topic: to enter a topic the appropriate text is entered in the Topic' 402 text field.
Defining the scenario ID: to enter a scenario ID (which must be a locally unique integer in the range 1 to 0xffffffff) the locally unique number is entered in the Scenario ID' 403 text field.
Defining the scenario name: to enter a scenario name the appropriate text is entered in the Scenario Name' 404 text field.
Defining the scenario description: to enter a scenario description the appropriate text is entered in the Scenario Description' 405 text field.
Defining theme attributes: the Theme Attribute' section 406 of the Theme' form 400. Theme attributes are variables defined at the scenario level. Theme attributes can be used to store information such as phrases or numeric values to aid control over the scenario flow.
* There are two pre-set theme attributes for every scenario: Name' and Transition'. These two pre-set theme attributes have special functions, as described in more detail below. User defined theme attributes may be added or removed as follows:
* Adding theme attributes: to add a theme attribute the Add' button 407 is selected. This results in the Add Attribute' form 406a shown in Figure 14 being displayed. By selecting in the Attribute Name' text field 406b the name of the attribute is entered. By then selecting the Done' button 406c the new name is displayed in the Theme Attribute' list 408. It is not necessary to define the theme attributes at this stage as they can be defined as needed during the context event entries.
* Removing theme attributes: the attribute to be removed is selected in the Theme Attribute' list 408. Then the Remove' button 409 is selected.
Defining Phrases: the Phrases' section 410 of the Theme' form 400. Phrases may be defined here at this stage or they may be defined as needed during the context event entries. One advantage of entering phrases at this stage is that it allows control over the order of the phrases. The order of the phrases determines the index used to reference the phrase in the simulator and in the actual doll.
Some advanced features require that some phrases must be in a certain order. For example, if counting is required in a conversation it is helpful to have the phrases for the integers 1, 2, 3 etc. in the order N, N+1, N+2, etc.
* Adding phrases: to add phrases the Add' button 411 is selected. This results in the Add Phrases' form 410a shown in Figure 16 being displayed. Phrases may be entered as text with each phrase terminated by a carriage return character. When all the required phrases have been entered the Done' button 410b is selected. The phrases then are displayed appended to the Phrases' list 414.
* Editing phrases: to edit phrases the Edit' button 412 is selected. This results in the Edit Phrases' form 412a shown in Figure 17 being displayed. Phrases may be edited as required. When all the required phrases have been edited the Done' button 412b is selected. The phrases then are displayed appended to the Phrases' list 414.
* Removing phrases: To remove a phrase the phrase is selected in the Phrases' list 414 and the Remove' button 413 is selected.
* Defining the Golden Phrase: To define the Golden Phrase the pull-down arrow is selected in the Golden Phrase' 415 field, resulting in display of a list of all available phrases and allowing selection of the phrase desired for the Golden Phrase.
Defining roles: the Roles' section 429 of the Theme' form 400. Roles are like the virtual characters in a play. All scenarios are developed around a set of characters (roles), where each character plays the part of a defined role in the scenario. Each role can be given a nickname and a representative personality.
* Adding a role: to add a role the Add' button 417 is selected and the Add Role' form 417a, shown in Figure 18, is displayed. The Description' text box 417b contains a new default role number. The default role number may be edited to give the new role a nickname; eligible role names are not only numbers, but also text -any name can be used. Defining role nicknames is helpful for a scenario developer as it makes it easier to remember the personality of a named role. The new role can also have a personality set by setting values for each of 8 personality traits 417c. The personality trait values can range from 0 to 15. Each trait represents a different aspect of personality, e.g. introvert/extrovert, funny/serious, talkative/thoughtful etc. Once the role data has been entered the Done' button 417d is selected.
Editing a role: to edit a role the role to be edited is selected in the Roles' list 416 and the Edit' button 418 is selected. The Edit Role' form is displayed. The Edit Role' form displays the same features as the Add Role' form 417a, shown in Figure 18. The role's nickname is displayed in the Description' field 417b. The role's personality traits 417c are also displayed. The role name as well as the role personality traits may be edited. The Done' button 417d is selected when all changes have been made.
* Removing a role: to remove a role the role is selected in the Roles' list 416 and the Remove' button 419 is selected.
Defining role attributes: the Role Attributes' section 430 of the Theme' form 400. Role attributes are variables defined at the role level. They can be used to store information such as phrases or numeric values to aid the control of the scenario flow.
* Each role has one pre-set attribute: Name'. This attribute contains a reference to the name of the doll playing this role in a particular instance of the scenario.
* Adding a role attribute: to add a role attribute the Add' button 421 is selected. The Add Attribute' form 406a as shown in Figure 14 is displayed. The name of the new role attribute is entered in the Attribute Name' text field 406b and the Done' button 406c is selected.
* Removing a role attribute: To remove a role attribute the attribute is selected in the Role Attributes' list 420 and the Remove' button 422 is selected.
* Editing a context list: The actual control of the conversation is handled in the role context entries. Role context entries are accessed with the Edit Context for Role' field 423. Upon selection of the drop-down arrow, a list of available roles is displayed; upon selection of one of the available roles the so-called context list 500 for that role is displayed, as shown in Figure 15. The context list 500 is described in more detail below.
Saving a scenario: to save a scenario the "Save" button 424 on the Theme' form 400 is selected. The scenario must have a valid ID before it can be saved.
Compiling a scenario: to compile a scenario the "Compile" button 425 on the Theme' form 400 is selected. The scenario must have a valid ID before it can be compiled. The compiler conducts several checks on the context entries before performing the actual compilation, including checks to ensure:
* all label references are satisfied
* all attribute references have been defined
* all phrase references have been defined
* all branch statements have the required minimum number of values
If any of the above checks fail then an error form is displayed and compilation is aborted. If the compilation succeeds then a "scenario name".bin file is produced which contains the compiled scenario without any audio data, which is suitable for testing in the simulation tool as described below.
Creating audio data: to create synthesised audio for the scenario the "Create Audio" button 426 on the Theme' form 400 is selected. This synthesises audio data for all the scenario phrases in each supported voice. Each supported voice must have an associated synthetic voice defined. For audio synthesis conventional audio synthesis software can be used.
Importing audio data: to import recorded audio data for the scenario the "Import Audio" button 427 on the Theme' form 400 is selected. This opens the Import Audio' form 427a shown in Figure 19. The Import Audio' form 427a facilitates the importing of recorded audio data for the specified voice from a folder specified by the user in the From Folder' field 427b. The user specified folder must contain the audio files A00006.wav upwards, which represent the audio rendering of each phrase specified in the scenario and listed in the phrases.txt file.
Compiling a scenario with audio data: when testing in the absence of audio data is complete, and after the audio data has been either created or imported from recordings and tested in the simulator, the scenario can be compiled with audio data by selecting the "Compile + Audio" button 428. This runs the compiler as described above but produces a "scenario name_v".bin file. The compiler then checks that all the audio files defined in the scenario are present for each defined voice. It then compresses all the audio data, producing an "audio_vn.bin" file for each voice, and appends the compressed audio data for each voice to the "scenario name_v".bin file, producing a "scenario name_vn".bin file for each voice n. These files are then available for uploading to the website for subsequent delivery to the dolls.
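Conceptually, this packaging step is a concatenation of the compiled scenario with each voice's compressed audio. The following Python sketch illustrates the idea under the assumption that the compiled scenario and compressed audio files already exist on disk; the file naming follows the description above and the function name is illustrative only:

    import shutil

    def package_scenario(name, voices):
        # Append each voice's compressed audio to the compiled scenario,
        # producing one '<name>_vn.bin' file per voice n.
        for n in voices:
            with open(f"{name}_v{n}.bin", "wb") as out:
                with open(f"{name}_v.bin", "rb") as scenario:
                    shutil.copyfileobj(scenario, out)  # compiled scenario first
                with open(f"audio_v{n}.bin", "rb") as audio:
                    shutil.copyfileobj(audio, out)     # compressed audio appended

    # e.g. package_scenario('my scenario', voices=[1, 2]) would produce
    # 'my scenario_v1.bin' and 'my scenario_v2.bin'.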
Context List
The context list for each role is the way in which the logic flow of the scenario is defined. A context list is accessed with the Edit Context for Role' field 423 of the Theme' form 400, as described above. A list of available roles is displayed; upon selection of one of the available roles the context list 500 for that role 509 is displayed, as shown in Figure 15.
By default when a new context entry is entered for any one role it is propagated to all other roles. So by default all roles start with exactly the same entries in their context lists. This is so that scenarios may still function even though some roles are not filled in a particular situation. In this case the existing role players take the part of any missing role at random as required during the flow through the scenario. It is therefore essential that all roles have context entries for the whole scenario even though they are ordinarily not intended to be involved at a particular context entry. For the same reason it is important that all phrases are recorded in all supported voices even though a particular role may not ordinarily be intended to speak a particular phrase.
Given a role with a context list that is populated with context event entries, it is possible to edit the existing context entries. In particular it is possible to program a different response for each role for the same context event entry, thus producing much more interesting and varied results.
The context list 500 consists of a series of context events 507, 508. A context event can have entries in one, some, or all of the following fields (corresponding to column headings):
* Label 501 -This is text which defines a label for the context entry. This is mainly used in the branch field. If available the Scene' name entry is used as the base of the labels for the appropriate entries. If no Scene' name entry is specified then Start' is the base text of the labels.
* Set Attributes 502 -This allows a set of attributes (theme or role) to be allocated values according to various rules.
* Conditions 503 -This allows an attribute (theme or role) to be tested for various values and for the resulting condition to influence subsequent actions.
* Say 504 -This is where a choice of statements to be spoken can be specified.
* Branch 505 -This is where the choice of a branch to a new context entry can be specified.
* Transition 506 -This is where the next active doll is chosen.
The function of the different fields of the context events is now described in more detail.
Labels 501: when a new theme is started the context entry for each role begins with the label Start' 507. As new context entries are added they automatically get the labels Start:1', Start:2', etc. The final context entry in the list has the label End'. At any point, for example at label Start:10', it is possible to change the label. This enables sections to have meaningful labels throughout the theme. If the label Start:10' is changed to NewLabel' then as subsequent context entries are added they automatically receive the labels NewLabel:1', NewLabel:2', etc.
* Changing a label: to change a label the Label' field at the appropriate entry is double-clicked, whereupon the Set Label' form 501a shown in Figure 20 is displayed. The new label is entered in the New Label' text field 501b and the Save Label' button 501c is selected.
Set Attributes 502: the Set Attributes' field for each context entry allows the storing of values to any of the defined attributes. It is also possible to add new attributes at this stage. Attributes may be added at the theme or role level as required. To add, edit, or delete set attributes the Set Attributes' field of the chosen context entry is double-clicked, whereupon the Set Attributes' Form 502a shown in Figure 21 is displayed.
Selecting attribute role/theme: the role (or theme) to which the attribute relates is selected by means of the drop-down arrow in the first text box 502c in the Attribute' section 502b. In the example scenario illustrated in Figure 9 the following role (or theme) attributes are available to set:
o theme' -this indicates that a theme attribute is set.
o me' -this indicates that a role attribute belonging to the current active role is set.
o prev' -this indicates that a role attribute belonging to the previous active role is set.
o next' -this indicates that a role attribute belonging to the next active role is set.
o each' -this indicates that the attribute for each present role is set.
o all' -this indicates that the attribute for all the roles is set.
o Mary' -this indicates that a role attribute belonging to the role nicknamed Mary is set.
o Alice' -this indicates that a role attribute belonging to the role nicknamed Alice is set.
o Liz' -this indicates that a role attribute belonging to the role nicknamed Liz is set.
o Evie' -this indicates that a role attribute belonging to the role nicknamed Evie is set.
In an illustrative example me' is selected.
* Selecting attribute: the attribute is selected by means of the drop-down arrow in the second text box 502d in the Attribute' section 502b. A list containing the currently available attributes for the role/theme selection is shown. The currently available attributes may include only pre-set attributes such as the Name' attribute. If it is desirable to create a new attribute the name of the desired attribute can be typed into the second text box 502d. In an illustrative example a role attribute called pet' is entered. When entry of the set attribute data is completed a new role attribute called pet' is added to the Role Attributes' list 420 in the Theme' form 400.
* Selecting assignment operator: the assignment operator is selected by means of the drop-down arrow in the Assignment' text box 502e. The following assignment operators are available to set: o 1' sets the value of the attribute to the new value if it is not already set.
o = sets the value of the attribute to the new value irrespective of its current value.
o + adds the new value to the current value of the attribute.
o - subtracts the new value from the current value of the attribute.
o * multiplies the current value of the attribute by the new value.
o / divides the current value of the attribute by the new value.
o & adds the new value to the current value of the attribute modulo the number of present dolls.
In an illustrative example the operator = is selected (the assignment operators are illustrated in the sketch below).
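The assignment operators can be summarised in a short sketch. The following minimal Python function is illustrative only; integer arithmetic is assumed, and the symbol used here for the conditional set operator is an assumption rather than a confirmed rendering:

    def apply_assignment(operator, current, new, doll_count):
        # Apply a set-attribute assignment operator (symbols per the list above;
        # '!' for the conditional set is an assumed rendering of that operator).
        if operator == '!':
            return current if current is not None else new
        if operator == '=':
            return new
        if operator == '+':
            return current + new
        if operator == '-':
            return current - new
        if operator == '*':
            return current * new
        if operator == '/':
            return current // new                # integer arithmetic assumed
        if operator == '&':
            return (current + new) % doll_count  # addition modulo present dolls
        raise ValueError(operator)

    print(apply_assignment('&', 3, 2, doll_count=4))  # prints 1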
Selecting assignment value: the new value is defined in the Value' section 502m of the Set Attributes' Form 502a. The assignment value type is selected by means of the drop-down arrow in the Value' text box 502f. The following assignment value types can be selected: o phrase for a phrase reference.
o attribute for a new value that is the same as another attribute.
o numeric for an arithmetic value in the range 0 to 0xfff.
o dolls for information concerning the present dolls.
o null for the null reference (0x5000).
Once the value type is specified, the value is entered by means of a form particular to each type of value. The entry of different types of values is described in more detail below. A weight can be associated with a value by means of the Weight' field 502i. After the value has been entered it is shown in the Values' list 502g. As shown in Figure 21, more than one value can be entered; in the example, the values dog', cat', bunnie', and gerbil' have been entered.
* Selecting assignment method: if more than one value has been entered, then one of them is chosen for assignment. The method of choosing is selected by means of the drop-down arrow in the Method' text box 502h. The following methods are available to set (a sketch of the three methods is given at the end of this section):
o Condition: the choice of values for all the set attribute entries is based on the previous context entry's conditions. This is described in more detail below, but briefly the conditions set a condition value as follows:
* 0 if none of the conditions are met
* 1 if the first condition is met
* 2 if the second condition is met
* ...and so on
If the method is set to Condition then the choice of value is the first value if condition = 0, the second value if condition = 1, and so on. The weights are ignored by the condition method.
o Random: the choice of values for all the set attribute entries is the weighted random choice from the value list provided for each set attribute entry. The default weight value is 1 (causing all values to be equally weighted by default) and can be changed with the Weight' field 502i.
o Unique: the choice of values for all the set attribute entries are randomly selected as above, but are constrained so that each role receives a unique value.
Adding set attribute: once the list of values with weights in the Values' list 502g is complete and the required data has been specified, the "Add Set Attribute" button 502j in the Set Attributes section is selected to add the set attribute. The new set attribute is shown in the Set Attributes' list 502k, and the upper part of the form is returned to its default without previous entries being shown. In the illustrated example the new set attribute entry is: "me.pet=[(1,dog'),(1,cat'),(1,bunnie'),(1,gerbil')]" meaning that me.pet receives an assignment by random choice from dog', cat', bunnie' or gerbil'.
* Saving set attribute: once all set attribute entries have been made the Save Set Attributes' button 502l is selected. The display then returns to the context entry list 500 where the updated set attributes entries are shown. Figure 22 shows, for the illustrative example described above, the updated Set Attributes' field 510 for the context entry.
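As noted above, the three selection methods can be sketched as follows. This minimal Python sketch operates on a single value list of (weight, value) pairs; the function name is illustrative only, and for the Unique method weights are ignored in this simplified sketch:

    import random

    def choose_value(method, value_list, condition=0, role_count=0):
        if method == 'Condition':
            # Condition n selects the (n+1)th value; weights are ignored.
            return value_list[condition][1]
        if method == 'Random':
            values = [v for _, v in value_list]
            weights = [w for w, _ in value_list]
            return random.choices(values, weights=weights, k=1)[0]
        if method == 'Unique':
            # One distinct value per role (weights ignored in this sketch).
            values = [v for _, v in value_list]
            return random.sample(values, k=role_count)
        raise ValueError(method)

    pets = [(1, 'dog'), (1, 'cat'), (1, 'bunnie'), (1, 'gerbil')]
    print(choose_value('Random', pets))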
Conditions 503: the Conditions' field 503 in the context entry list enables testing of the value of a chosen attribute by comparing it in various ways to a set of other values. This allows changes to the subsequent flow of the theme in dependence on the value of any attribute. When a context entry is processed the Conditions' field causes a condition variable to be set as follows:
* 0 if no conditions are met
* 1 if the first condition is met
* 2 if the second condition is met and no previous condition is met
* and so on.
The value of the condition variable can be used in the Say' field, in the Branch' field and in subsequent Set Attribute' fields to determine the outcome. To add, edit, or delete conditions the Conditions' field of the chosen context entry is double-clicked, whereupon the Conditions' form 503a shown in Figure 23 is displayed.
Selecting condition attribute: an attribute is specified using the drop-down lists and the text boxes in the Attribute' section 503b, as per the Set Attribute' form 502a described above.
If the each' type attribute has been selected then the attribute for each present role is tested against the first value specified and the condition value is set as follows: o -if the comparison fails for every present role.
1 -if the comparison succeeds for roleo.
2-if the comparison succeeds for rolel and so on.
If the all' type attribute has been selected then the attribute for each role, whether present or not, is tested against the first value specified and the condition value is set as follows:
o 0 -if the comparison fails for every role.
o 1 -if the comparison succeeds for role0.
o 2 -if the comparison succeeds for role1, and so on. (This per-role testing is illustrated in the sketch at the end of this section.)
* Selecting condition type: the assignment method is selected by means of the drop-down arrow in the Assignment' text box 503f. The following assignment methods are available to set: o = means that the selected attribute is tested for equality against the list of values.
o # means that the selected attribute is tested for inequality against the list of values.
o < means that the selected attribute is tested as being less than the list of values.
o > means that the selected attribute is tested as being greater than the list of values.
The list of values used for comparison is built by means of selecting the value drop-down arrow.
Selecting condition value: the condition value type is selected by means of the drop-down arrow in the Value' text box 503c. The following condition value types can be selected: o phrase for a phrase reference.
o attribute for a new value that is the same as another attribute.
o numeric for an arithmetic value in the range 0 to 0xfff.
o dolls for information concerning the present dolls.
Once the condition value type is specified, the condition value is entered by means of a form particular to each type of value, in the same way as selection of a value in the Value' section 502m of the Set Attribute' form 502a described above. After the condition value has been entered it is shown in the Values' list 503d.
* Saving condition: when all condition data has been entered the Save Condition' button 503e is selected. The display then returns to the context entry list 500 where the updated condition is shown.
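The per-role testing performed for the each' and all' attribute types can be sketched as follows. This is a minimal Python illustration; the function name and the comparison callback are illustrative only:

    def condition_for_roles(attribute, compare, value, roles):
        # Return 1 + the index of the first role whose attribute passes the
        # comparison against the first specified value, or 0 if none passes.
        for index, role in enumerate(roles):
            if compare(role[attribute], value):
                return index + 1
        return 0

    roles = [{'pet': 'dog'}, {'pet': 'cat'}, {'pet': 'gerbil'}]
    print(condition_for_roles('pet', lambda a, v: a == v, 'cat', roles))  # 2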
Say 504: the Say' field 504 in the context entry list 500 enables specification of a list of statements that may be spoken by the active doll at a specific context entry.
To add or edit statements the Say' field of the chosen context entry is double-clicked, whereupon the Say' form 504a shown in Figure 24 is displayed.
* Selecting statement method: a method is specified according to which a particular statement to be spoken is selected from a list of statements. The available methods are Random' and Condition'. The method is chosen by means of the Method' drop-down arrow 504b. If the method is Random' then the statement is selected as a weighted random choice from the list of statements. If the method is Condition' then the statement is selected according to the condition variable set by the Conditions' field. Weight values can be allocated to phrases with the Weight' field 504c.
* Selecting statement phrases: each statement can consist of a list of phrases. Phrases are added to a statement by selecting the Phrase' drop-down arrow 504d. The following phrase types can be selected:
o phrase for a phrase.
o attribute for an indirect reference to a phrase that has previously been stored in an attribute.
o numeric for specification of a period of silence as a phrase. A value of n means a silence of n/100 seconds.
Once the phrase type is specified, the phrase is entered by means of a form particular to each phrase type. The entry of different phrase types is described in more detail below (the same entry forms are used as for specifying an assignment value in dependence on the value type 502f in the Set Attribute' form 502a described above). After the phrase has been entered it is shown in the Values' list 504e. Further phrases may be entered by the same procedure. When all the phrases needed for a particular statement have been entered then the Add Statement' button 504f may be selected to add the statement (with the weight if applicable) into the Statements' list 504i. The above may be repeated to add more statements to the Statements' list 504i.
Selecting statement timing: control of the timing of spoken statements is enabled by the Timings' field 504g. A timing value of 0 indicates a follow-on event, wherein the spoken statement starts when the previous spoken statement finishes. A positive timing value of n means that the spoken statement starts n/100 seconds after the previous spoken statement starts. A negative timing value of n means that the spoken statement starts n/100 seconds before the previous spoken statement finishes.
* Saving statement: when all statements and statement data have been entered the Save Statement' button 504h is selected. The display then returns to the context entry list 500 where the updated Say' data is shown.
Branch 505: the Branch' field 505 in the context entry list 500 allows specification of a list of possible context entries for the scenario control to pass to.
For ease of analysing the scenario flow a branch highlighting function is provided.
Upon selection of a context entry all other context entries that correspond to each of the defined branch labels are highlighted. This is useful for finding the location of the target of branches when editing a complete scenario. To add or edit branch data the Branch' field 505 of the chosen context entry is double-clicked, whereupon the Branch' form 505a shown in Figure 25 is displayed.
* Branch type selection: the branch type is selected by means of the drop-down arrow in the Branch Type' field 505b. There are two types of branch: o Goto: the Scenario flow jumps to a new location (chosen from the list of locations) based on the chosen method.
o Gosub: the Scenario flow remembers a return location, specified by the first location in the list, and jumps to a new location (chosen from the remaining locations in the list of locations) based on the chosen method.
* Branch method selection: the branch method is selected by means of the drop-down arrow in the Branch Method' field 505c. There are three methods for choosing a new location from the list of locations: o Random: the new location is chosen as a weighted random choice from the available list of locations (the complete list for Goto, the complete list except the first entry for Gosub). Weight values can be allocated to branches with the Weight' field 505d.
o SayChoice: the new location is chosen based on the choice made for the Say' field 504 in the context entry list 500. For example if the Say' field 504 has selected the third statement of its statements list, then the new location to which the conversation is branched is the third entry in the location list for a Goto or the fourth (3+1) entry for a Gosub.
o Condition: the new location is chosen based on the condition variable as set by the Conditions' field 503 in the context entry list 500. For example if the condition variable = n then the new location is the (n+1)th entry in the list for a Goto or the (n+2)th list entry for a Gosub (see the sketch below).
* Branch location selection: possible locations to which the branching may occur are selected by means of the drop-down arrow of the Branch' field 505e. All available existing labels are subsequently shown for selection in the drop-down list. Also, a Return' location is available for selection; this causes the branch to pass (back) to the return location stored when a Gosub branch is active. Further, it is possible to add a new label by entering it into the text box of the Branch' field 505e; however, the appropriately labelled context entry must be added manually (the compiler otherwise provides warnings of any unsatisfied branch locations at compilation time). After the branch information has been entered the Add Branch' button 505f is selected, whereupon the new branch is shown in the Values' list 505h.
Saving branch: when the complete list of possible branch locations has been entered and all branch data has been specified the Save Branch' button 505g is selected. The display then returns to the context entry list 500 where the updated branch data is shown in the Branch' field 505.
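The mapping from the SayChoice and Condition methods onto the location list, including the Gosub shift described above, can be sketched as follows. This is a minimal Python illustration with illustrative names only:

    def branch_position(method, branch_type, say_choice, condition):
        # Return the 1-based position of the new location in the location list.
        # For a Gosub the first entry is the return location, shifting by one.
        shift = 1 if branch_type == 'Gosub' else 0
        if method == 'SayChoice':
            return say_choice + shift        # e.g. third statement -> third/fourth entry
        if method == 'Condition':
            return condition + 1 + shift     # condition n -> (n+1)th / (n+2)th entry
        raise ValueError(method)

    print(branch_position('SayChoice', 'Gosub', say_choice=3, condition=0))  # 4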
Transition 506: the transition field 506 in the context entry list 500 allows specification of how the next active doll is chosen. To add or edit a transition the Transition' field of the chosen context entry is double-clicked, whereupon the Transition' form 506a shown in Figure 26 is displayed. The following choices are available (a sketch resolving these choices is given after this list): o Me: the next active doll remains the same as the current active doll.
o Prev: the next active doll is the same as the previous active doll.
o NotMe: the next active doll is chosen at random from all present dolls except the current active doll (Me).
o Any: the next active doll is chosen at random from all present dolls.
o Next: the next active doll is the doll with the next index in the list of present dolls. When the end of the list of present dolls is reached it starts again from the beginning.
o Transition: the next active doll is the doll corresponding to the index into the list of present dolls stored in the Theme.transition attribute.
o (in the example illustrated above) Mary: the next active doll is the doll playing the role nicknamed Mary.
o (in the example illustrated above) Alice: the next active doll is the doll playing the role nicknamed Alice.
o (in the example illustrated above) Liz: the next active doll is the doll playing the role nicknamed Liz.
o (in the example illustrated above) Evie: the next active doll is the doll playing the role nicknamed Evie.
Once a selection has been made the transition is automatically saved in the Transition' field 506 of the context entry.
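As noted above, these choices can be sketched as a simple resolution function. The following minimal Python sketch is illustrative only; the nickname lookup table is an assumption of the sketch:

    import random

    def next_active_doll(choice, dolls, current, previous, theme, roles):
        # dolls is the ordered list of present doll indices; roles maps a role
        # nickname to the doll playing that role (illustrative assumption).
        if choice == 'Me':
            return current
        if choice == 'Prev':
            return previous
        if choice == 'NotMe':
            return random.choice([d for d in dolls if d != current])
        if choice == 'Any':
            return random.choice(dolls)
        if choice == 'Next':
            return dolls[(dolls.index(current) + 1) % len(dolls)]  # wraps to start
        if choice == 'Transition':
            return dolls[theme['Transition']]  # index stored in the theme attribute
        return roles[choice]                   # a role nickname such as 'Mary'

    print(next_active_doll('Next', [0, 1, 2], current=2, previous=1,
                           theme={'Transition': 0}, roles={}))  # prints 0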
In the simulations and in the actual doll the fields of the context event entries are processed in the following order: Transition, Set Attributes, Conditions, Say, and Branch.
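This processing order can be pictured as a simple interpreter loop. The following minimal Python skeleton is illustrative only: the field handlers are represented as placeholder callables, whereas the real dolls and the simulator implement each field as described above:

    def run_scenario(entries, start_label, state):
        # Process each context entry's fields in the documented order:
        # Transition, Set Attributes, Conditions, Say, then Branch.
        label = start_label
        while label != 'End':
            entry = entries[label]
            for field in ('transition', 'set_attributes', 'conditions', 'say'):
                handler = entry.get(field)
                if handler:
                    handler(state)           # a null field means no action
            label = entry['branch'](state)   # the branch decides the next label

    # Toy example: a single silent entry that immediately ends the scenario.
    entries = {'Start': {'branch': lambda state: 'End'}}
    run_scenario(entries, 'Start', state={})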
In the exemplary context list 500 shown in Figure 15 the first context entry 507, labelled Start', is processed as follows: * Transition -Mary -This chooses the role nicknamed Mary as the next active doll.
* Set Attributes -null -No action.
* Conditions -null -No action.
* Say -null -No action.
* Branch -Goto<Random,[(1,Mall1),(1,Mall2)]> -This means that control passes to the context entry labelled Mall1' or Mall2' chosen randomly according to the list of weights and labels inside the bracketed expression.
The result of processing context entry Start' 507 is to select the role nicknamed Mary as the active doll and then branch to the context entry labelled either Mall1 or Mall2. The purpose of this context entry Start' 507 is to make sure that the conversation always begins with the role nicknamed Mary. In the real situation with actual dolls the controller doll starts the conversation and any of the dolls may be the controller -and the controller doll may assume another role than the role nicknamed Mary. This first entry, where no say statement occurs, silently changes the active doll (the default active doll at conversation initialisation being the controller doll) to the role nicknamed Mary, regardless of which role the controller doll has assumed.
In the above example the second context entry 508, labelled Mall1', is processed as follows: * Transition -Alice -This chooses the role nicknamed Alice as the next active doll.
* Set Attributes -null -No action.
* Conditions -null -No action.
* Say -Say<Random,0,[(1,"hi -",next.Name,"thanks for coming!")]> -This means that the statement consisting of the phrases "hi -", the next active doll's name (in this case the name of the doll taking the role nicknamed Alice), and "thanks for coming!" is spoken by the doll taking the part of the role nicknamed Mary. The 0 after the word Random indicates that this is a "Follow-On" event, so that the statement is spoken at the end of any previous statement.
* Branch -Goto<Random,[(1,Mall1:1)]> -This means that control passes to the context entry labelled Mall1:1.
The result of processing context entry Mall1' is to: 1. select the doll playing the role nicknamed Alice as the next active doll, 2. use the doll playing the role nicknamed Mary to speak the statement "hi -", the name of the doll playing the role nicknamed Alice, "thanks for coming!", 3. branch to the context entry labelled Mall1:1.
The process continues processing the appropriate context entries for each active doll at the chosen labels until it reaches the end.
Context List Menu
The context list form 500 as shown in Figure 15 provides a menu 511 with various operations pertaining to the context list 500. The following operations are available:
* Clear Field: replaces a chosen field with null
* Copy Field to All Roles: copies the data in a chosen field to the equivalent field in all the other roles' context lists.
* Insert Row: inserts a new row at the chosen location.
* Remove Row: removes a selected row.
* Copy Selected Rows: makes a copy of selected rows in a clipboard.
* Cut Selected Rows: makes a copy of selected rows in a clipboard and removes them from the context list.
* Paste Rows: inserts the rows on the clipboard into the context list at the chosen location.
* Find text in cells: opens a search form that allows the searching of the context entries for any chosen text. The search may be specified as progressing forwards or backwards through the list starting from the currently selected cell. This operation can be useful for editing completed scenarios.
Input forms
The input forms for the different types of entries (phrase, attribute, numeric, dolls, null) are now described in more detail.
Input form: phrase
Phrases are added with the Choose Phrase' form, as shown in Figure 27.
Selecting the drop-down arrow in the Phrase' field 600 causes a drop-down list to be shown with the already defined phrases. Instead of selecting one of these, a new phrase may be entered by typing the phrase into the text box of the Phrase' field 600 and then selecting a Save Phrase Choice' button (not shown).
Input form: attributes
Attributes are added with the Choose Attribute' form, as seen in Figure 28.
Selecting the drop-down arrow in the first Attribute' field 601 causes a drop-down list to be shown with a list of available roles/themes:
* Theme: a theme attribute value is used
* Me: a role attribute value belonging to the current active role is used
* Prev: a role attribute value belonging to the previous active role is used
* Next: a role attribute value belonging to the next active role is used
* (in the example illustrated above) Mary: a role attribute value belonging to the role nicknamed Mary is used
* (in the example illustrated above) Alice: a role attribute value belonging to the role nicknamed Alice is used
* (in the example illustrated above) Liz: a role attribute value belonging to the role nicknamed Liz is used
* (in the example illustrated above) Evie: a role attribute value belonging to the role nicknamed Evie is used
Selecting the drop-down arrow in the second Attribute' field 602 causes a drop-down list to be shown with a list of available attributes for the selected role/theme.
As an example, if in the first Attribute' text box 601 Theme' is selected, the second Attribute' text box 602 shows the following choices:
* Name
* Transition
* Test
These three attributes are the existing defined theme attributes in this example.
Instead of selecting attributes from a drop-down list, a new attribute may be entered by typing the attribute name into the text box of the second Attribute' field 602.
When the desired theme attribute has been specified the "Save Attribute Choice" button 603 is selected, causing the Choose Attribute' form to close and return to the previous form, where the chosen attribute is now listed.
Input form: numeric values
Numeric values are added with the Choose Numeric' form, as shown in Figure 29. Numeric values in the range 0 to 0xfff can be entered in or selected with the arrows of the Set Value' field 604. When the desired value has been specified, selecting the "Save Value" button 605 causes the Choose Numeric' form to close and return to the previous form, where the chosen numeric value is now listed.
Input form: doll data
Information relating to present dolls is added with the Choose Dolls Data' form, as shown in Figure 30. The Choose Dolls Data' field 606 provides a list of doll information, such as:
* Count: a numeric value equal to the number of present dolls.
* Me: a numeric value equal to the index of the current active doll.
* Prev: a numeric value equal to the index of the previous active doll.
* Next: a numeric value equal to the index of the next active doll.
When the desired doll data has been specified, selecting a "Save Dolls Data" button (not shown) causes the Choose Dolls Data' form to close and return to the previous form, where the chosen doll data is now listed.
Theme testing
Once a scenario has been compiled it can be tested on a computing device. This can be done both before and after audio data is available. If testing is done before any audio data is available then the test simulator prints (or otherwise displays) the text that would be spoken. If testing is done after audio data is made available then the test simulators speak the phrases as well as printing/displaying the text.
Testing a scenario (after compiling) starts in the main Theme Development' window 300 shown in Figure 7. Selecting the drop down arrow of the Choose a Toy to Start -ID' text box 311 in the Theme -testing' section 310 causes a list of doll ID numbers to be displayed. The number of ID entries in the list is the same as the number of roles defined in the scenario.
The doll IDs displayed refer to the Dolls folder in the Theme Data Directory.
There are sub-folders for each Doll ID and these sub-folders each contain a doll data file named MyData.txt'. The doll data file contains the following data:
* The doll's name in text, e.g. Allison
* The audio reference to the doll's name as a hex string, e.g. 6002
* The doll's personality as a hex string, e.g. 1
* The name of the doll's voice in text, e.g. Voice1
* The count of the number of conversations entered, e.g. 450
The doll data file allows the simulator connected to a chosen doll ID to display meaningful information.
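Reading this file is straightforward. The following minimal Python sketch assumes one value per line in the order listed above; both that layout assumption and the key names are illustrative only:

    def read_doll_data(path):
        # The five fields, in the order described above (key names illustrative).
        keys = ('name', 'name_audio_ref', 'personality', 'voice', 'conversations')
        with open(path) as f:
            values = [line.strip() for line in f if line.strip()]
        return dict(zip(keys, values))

    # e.g. read_doll_data('Dolls/00000001/MyData.txt') might return:
    # {'name': 'Allison', 'name_audio_ref': '6002', 'personality': '1',
    #  'voice': 'Voice1', 'conversations': '450'}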
If Doll ID 00000001 is selected then in an illustrative example the simulation window shown in Figure 31 is displayed. This example shows a simulation window for a doll called Allison, and as this is the first doll started it is the Controller, as also displayed in Figure 31. Also shown is the fact that Allison has taken role 0 and her name audio reference number is 6002. Each entry is time stamped from the start of the particular simulator.
If now Doll ID 00000002 is selected then in the illustrative example the simulation window shown in Figure 32 is displayed. This example shows a simulation window for a doll called Courtney, and as this is not the first doll started it is the client, as also displayed in Figure 32. Also shown is the fact that Courtney successfully takes role 1 and her name audio reference number is 6003.
When a second simulator is started, a line in the controller simulation is added indicating that an active doll has joined the conversation and has taken role 1 and its name audio reference is 6003.
Further doll simulations can be added up to the total number of roles supported in the scenario. The scenario can also be simulated with only some roles taken. The conversation can be started and dolls added while it is running; dolls can also be removed while the simulation is running. This allows a thorough test of all the events that might happen with real dolls.
The conversation is started by selecting the "Start" button 701 in any of the simulation windows.
The conversation proceeds and the controller window (Allison in our example) displays text as shown in Figure 33. The conversation start is indicated with an entry "Chat init" 702. Diagnostic messages 703 follow, in the illustrated example pending phrase data and duration data. The statements 704 follow, including display of information regarding what is spoken and who speaks it.
The client simulation (Courtney) displays an output similar to the output on the Controller simulation, but only shows the text for what Courtney says.
The conversation may be paused and continued by selecting the "Start" button 701. It can be stopped by selecting the "Stop" button 705 and re-started by selecting the "Start" button 701.
A client doll may be removed from the conversation by selecting the "Exit" button 706 in its simulation window. If the controller "Exit" button is selected then the test is terminated and all simulation windows are closed.
After a scenario is tested log files of the simulation output for each doll are stored in the Theme folder for subsequent study.
This simulation testing allows rapid testing of a scenario followed by direct editing to correct any problems followed by re-testing until the scenario produces the desired results.
Theme publishing
This allows interaction with the website to allow uploading of scenarios, voices and names. When the Publish' menu item 305 in the main Theme Development' window 300 (described above with reference to Figure 7) is selected the Publish' form 800, as seen in Figure 10, is displayed. The Publish' form contains the following controls:
* Scenarios 801: Lists the Themes/Topics/Scenarios stored on the website.
* Voices 805: Lists the Voices stored on the website.
* Names 806: Lists the Names stored on the website.
* Upload Scenario 807: Uploads the currently opened scenario to the website.
* Upload Voice 808: Uploads the selected voice to the website.
* Upload Name 809: Uploads the selected name to the website.
These functions are described in more detail below.
Scenarios
When the Scenarios' button 801 is selected a request is sent to the website to retrieve all Theme/Topic/Scenario information. If a connection to the website has not already been made then a login prompt is displayed first.
Once retrieved, the Publish' form shows the Theme/Topic/Scenario information, as shown in Figure 34. The Theme/Topic/Scenario information is displayed in a tree structure with the following hierarchy:
* theme name (ThemeID)
  o topic name (TopicID)
    * scenario data:
      - scenario name
      - scenario description
      - Golden Phrase
      - whether the scenario is in test mode
Scenarios, topics and themes may be edited or deleted by first selecting the required theme, topic or scenario name, then selecting the Delete' button 802 or the Edit' button 803.
Editing a Scenario: when the Edit' button 803 is selected with a scenario selected the Edit Scenario Text' form shown in Figure 35 is displayed. Fields are provided that allow modification of the name, description, topicID and/or development status of the scenario. When the OK' button is selected changes are sent to the website and the display updated accordingly.
Deleting a Scenario: when the Delete' button 802 is selected with a scenario selected then the scenario is deleted from the website.
Editing a Topic: when the Edit' button 803 is selected with a topic selected the Edit Topic Text' form seen in Figure 36 is displayed. Fields are provided that allow modification of the name, description and/or themeID of the topic. When the OK' button is selected any changes are sent to the website and the display updated accordingly.
Deleting a Topic: when the Delete' button 802 is selected with a topic selected then the topic is deleted from the website, provided that it contains no scenarios.
If it contains scenarios a message is displayed informing the user to delete the scenarios before deleting the topic.
Editing a Theme: when the Edit' button 803 is selected with a theme selected the Edit Theme Text' form seen in Figure 37 is displayed. Fields are provided that allow modification of the name and/or description of the theme. When the OK' button is selected any changes are sent to the website and the display updated accordingly.
Deleting a Theme: when the Delete' button 802 is selected with a theme selected then the theme is deleted from the website, provided that it contains no topics. If it contains topics a message is displayed informing the user to delete the topics before deleting the theme.
Voices
When the Voices' button 805 is selected a request is sent to the website to recover all the voice information. If a connection to the website has not already been made then a login prompt is displayed. The voice information is displayed in the Publish' form as illustrated in Figure 38.
Editing a Voice: when the Edit' button 803 is selected with a voice specified the Edit Voice Text' form shown in Figure 39 is displayed. A field is provided that allows modification of the voice description. When the OK' button is selected any changes are sent to the website and the display updated accordingly.
Deleting a Voice: when the Delete' button 802 is selected with a voice specified then the voice is deleted from the website.
Names
When the Names' button 806 is selected a request is sent to the website to recover all the name information. If a connection to the website has not already been made then a login prompt is displayed. The name information is displayed in the Publish' form as illustrated in Figure 40.
Editing a Name: when the Edit' button 803 is selected with a name specified the Edit Name Text' form shown in Figure 41 is displayed. A field is provided that allows modification of the name. When the OK' button is selected any changes are sent to the website and the display updated accordingly.
Deleting a Name: when the Delete' button 802 is selected with a name specified then the name is deleted from the website.
Upload Scenario
When the Upload Scenario' button 807 is selected requests are sent to the website in order to upload the currently opened scenario data. If no scenario is currently open then a message is displayed informing the user to open a scenario first. A check is made to determine if a scenario with the current scenario name in the current theme and topic already exists on the website. If it does not then the scenario is created on the website. The theme and topic are also created on the website if they do not already exist. Then the scenario binary files for each supported voice are uploaded to the website. If the theme/topic/scenario already exists on the website then a request is displayed asking the user if they wish to modify the existing scenario. If the user selects yes' then a query is displayed asking the user if they wish to upload new binary files. If the user selects yes' to this query then any changes to the scenario description, golden phrase and test status are uploaded to the website followed by the new binary files. If the user selects no' then only the changes to the description, golden phrase and test status are uploaded.
Upload Voice
When the Upload Voice' drop down arrow 808 is selected a list of supported voices is displayed. If a voice is selected then requests are sent to the website in order to upload the voice. A check is made to determine if the voice currently exists on the website. If it does not then the voice is created and the sample audio file is uploaded to the website. If the voice currently exists on the website then a request is displayed asking the user if they wish to change the sample file.
If the user selects yes' then the new sample file is uploaded.
Upload Name
When the Upload Name' drop down arrow 809 is selected a list of supported names is displayed. If a name is selected then requests are sent to the website in order to upload the name and its audio representation in each supported voice. A check is made to determine if the name currently exists on the website. If it does not then the name is created and the audio representation file in each supported voice is uploaded to the website. If the name currently exists on the website then a request is displayed asking the user if they wish to change the audio representation files. If the user selects yes' then the new audio representation files are uploaded.
Log
When the Log' button 804 is selected a log of the website communication is displayed in the Publish' form as illustrated in Figure 42. The log display can be hidden by selecting the Log' button 804 again or by selecting the Scenarios' button 801, the Voices' button 805 or the Names' button 806.
Theme voice maintenance
This allows the maintenance of the voices supported by the development tool. When the Voices' menu item 307 in the main Theme Development' window 300 (described above with reference to Figure 7) is selected the Voice Maintenance' form 820 is displayed, as illustrated in Figure 11.
The Voice Maintenance' form 820 allows voices to be added or deleted. The Description field 824 for each voice can be entered and modified. The voice can support audio synthesis if a text-to-speech voice is specified. If audio synthesis is supported the audio sample may be created with the "Create" button 823. In this case the audio sample created is the audio synthesis of the text in the voice description field 824 (e.g. in the illustrated example the audio sample would speak the phrase "Fruity and yet quite smooth but not too"). Alternatively a recording of sample audio data can be indicated so that it can be copied into the development system. When a voice is added it is given the name "VoiceN" where N is the next available number, making sure any gaps are filled. The "Browse" button 821 allows the sample audio data to be located anywhere on the computer and the "Sample" button 822 allows the selected audio data to be heard as a check that it is the correct audio data.
Theme name maintenance
This allows the maintenance of the names supported by the development tool. When the Names' menu item 308 in the main Theme Development' window 300 (described above with reference to Figure 7) is selected the Names Maintenance' form 810 is displayed, as illustrated in Figure 12.
The Names Maintenance' form 810 allows names to be added (with the Add Name' button 817) or deleted (with the Delete' button 818). If the chosen voice supports audio synthesis then the audio data for the name can be created and compressed either for each supported voice or for all supported voices.
Alternatively a recording of the name in the selected voice can be indicated so that it can be copied into the development system and compressed appropriately.
The Browse' button 815 allows the audio representation to be located anywhere on the computer and the Sample' button 816 allows the selected audio data to be heard as a check that it is the correct audio data. When the Add Name' button 817 is selected a form is displayed in which a name can be entered. A new name is entered in the following format n: name_text', where n is the index that is used to reference the name, and name_text is the actual name. The n index should start at 1 and it must be unique; gaps are allowed but serve no useful purpose.
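Parsing such an entry can be sketched as follows. This is a minimal Python illustration; the function name is illustrative, and the uniqueness check across all entries is omitted:

    def parse_name_entry(entry):
        # Split 'n: name_text' into its integer index and the name itself.
        index_text, _, name_text = entry.partition(':')
        index = int(index_text)
        if index < 1:
            raise ValueError('name indices must start at 1')
        return index, name_text.strip()

    print(parse_name_entry('1: Allison'))  # (1, 'Allison')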
Stock phrase maintenance
This allows the maintenance of the stock phrases supported by the development tool. When the Stock Phrases' menu item 309 is selected the Stock Phrase Maintenance' form 830 is displayed, as illustrated in Figure 13.
The Stock Phrase Maintenance' form 830 allows stock phrases to be entered or modified in the Phrase' field 832. The Event' field 831 allows selection of the dolls for which the stock phrases are to be modified, in the illustrated example only one doll. If the chosen voice supports audio synthesis then the audio data for the stock phrase can be created and compressed either for each supported voice or for all supported voices. Alternatively a recording of the stock phrase in the selected voice can be indicated so that it can be copied into the development system and compressed appropriately. The Browse' button 833 allows the audio representation to be located anywhere on the computer and the Sample' button 834 allows the selected audio data to be heard as a check that it is the correct audio data.
An example of development of a board game scenario with the authoring tool is presented in Appendix C.
SIMPLIFIED AUTHORING TOOL
A simplified authoring tool provided to a user enables the user to generate personalised scenarios. The user writes a scenario with a simple scripting language. In one example, the scenario is written like a play. The user can also be provided with more advanced functionality in which the user can specify particular phrases for a character. The script is imported and the scenario data is generated by the authoring tool as described above.
The simplified authoring tool can also provide facilities for generating the audio file. The user may be provided with the option of recording the phrases, for instance with a microphone connected to a computing device. The simplified authoring tool can provide prompts and instructions for the recording of phrases.
If different voices are required for different dolls, then it may be necessary to record the dialogue phrases in more than one voice.
Instead of recording phrases, the simplified authoring tool can provide facilities for synthesising audio data. If audio files are required for multiple voices this may be more efficient than recording. Audio synthesis can be performed for preset voices with no further user involvement. Alternatively, the voice for which the audio data is to be synthesised can be specified by the user, or the user can select predefined voices from a list. For example, the user might choose to create a scenario with the roles of The Simpsons™, to be played with The Simpsons™ dolls, in which case the appropriate voices of the characters in The Simpsons™ would be used.
Templates may be provided to facilitate the scripting of scenarios. A template is a predefined skeleton scenario, where the user is prompted to fill in information to create the complete scenario. The skeleton scenario can be structured such that conversation branching is supported, to give the impression of randomness.
To guide the user the template can provide a variety of pre-prepared options, for example a selection of possible phrases from which the user can choose a desired sub-set of phrases. Selection can occur by providing a graphical user interface that allows a user to visually select desired phrases, for example by dragging and dropping, by marking a radio button, by selecting a button, or by other means. The sequence of speakers can follow a regular and predetermined pattern, such as A-B-A-B. This allows a simple but efficient text-based graphical user interface, where the user only needs to select the next phrase to be spoken from a pool of possible phrases.
By providing users, particularly children, with the ability to prepare their own scenarios, the doll can become more attractive and fun to play with, and provide a more engaging and exciting experience for users. Furthermore, as users have a creative hand in making their own stories the educational value is also enhanced.
The Story Creator' utility or tool takes the form of a graphic authoring tool. Users can create their own stories using a combination of setting options, text, and selections. A graphical user interface, comprising a storyboard or comic strip is provided for receiving input from a user, and providing guidance and options to the user for user selection. Figures 43a and 43b illustrate the storyboard with comic strip-like representation of conversation elements. A storyboard is composed of a number of panels, with each panel defining a small section of a scenario. A storyboard represents a scenario in its entirety.
The comic strip-like representation provides an easy to understand interface that allows the user to graphically set parts of the scenario, such as speaker sequence and theme. The storyboard also provides the user with an intuitive representation of the user's choices and settings, and also of the scenario itself as it develops. The Story Creator tool enables the user to produce data that can be converted and imported to a doll, without requiring specialist knowledge of the system. The limited nature of the storyboard also enables the length and complexity of a scenario to be regulated so as to comply with limitations of the dolls.
The stories can be stored and shared with other users, which adds additional value for a user. The stories a user creates can be entered into a monthly competition to win story of the month. An incentive can be awarded for completing and creating a new scenario.
The Story Creator tool is an online utility that is part of a larger interactive doll website accessible by users via a computing device. When a user accesses a particular area of the website, a map is displayed providing a number of possible locations, such as an ice cream parlour, a beach, and a pony stable. A user accesses the tool by selecting a location on the map. This enables pre-selection of a setting, corresponding to the theme of the scenario, for example, an ice cream parlour, the beach, or pony stables.
The Story Creator may provide the option of defining an element of the 'plot', corresponding to a topic, for example: 'having a talent competition'; 'finding a surprise'; or 'having a snack'. A choice of setting (and possibly plot) can inspire the user, and it also enables the Story Creator tool to filter the panels and phrases that are available to the user, thus making the tool easier to use. A pre-defined story template may also be provided, with the scenario only partially complete.
Selection of a setting causes the Story Creator tool to load. The Story Creator tool contains any previously saved scenarios and the option to create a new scenario. Once the setting (and optionally the plot) has been set, the Story Creator tool displays panels with characters (corresponding to roles in the scenario) in the selected setting (corresponding to the theme or topic of the scenario). A storyboard can include for example 20 panels as a default, with the option of adding further panels if required. In Figure 43a, a storyboard view 2000 displays three panels 2002, with each panel 2002 showing two characters 2008 that represent roles that can be played by the dolls. The user can navigate through the whole storyboard with next/previous buttons 2010.
The user is able to select from different panel types. This allows the user to choose how many characters participate in the conversation; whether they speak at the same time or one after the other; and the order in which they talk. The panel that the user chooses in one section may restrict the panels they can choose subsequently so that the story flows correctly. Optionally, the Story Creator tool provides resources for the user to generate, modify, and save personalised panels.
The background image 2006 of the panels 2002 represents the setting that is representative of the theme, in the illustrated example 'at the beach'. Optionally, a choice of setting can cause a menu of sub-options (or topics) to become available for selection; for example, if the setting 'at the mall' is selected, then the options 'In a clothes shop', 'At the ice-cream parlour', and 'In the car park' are made available.
The characters 2008 depicted in the panels 2002 can be selected by a user and later adapted using a menu. This allows the user to choose a representation that matches a particular doll. Options include for example the user's avatar, a friend's avatar, and a mentor. Optionally, if a friend's avatar is selected, the user can invite the friend to co-create the scenario. Contextual information can be used to adapt the character to a user's own doll or avatar. The characters that appear in each panel can be modified by the user.
Optionally, if a scenario is co-created by more than one user, each editor retains control over their own doll(s). Alternatively one editor at a time can control the progress of the storyboard, with control being handed over between editors.
Notification can be provided when a co-editor is editing the scenario. A communication panel may be provided between the co-editors.
The user is also able to create monologues or 'diary entries', which are scenarios played out by a single character. Monologues can be played back on dolls without other participating dolls.
Initially, inside the panels 2002 there are empty speech bubbles 2004 into which users can enter phrases to create the scenario. Each panel can contain a number of speech bubbles, depending on the context of the panel and the number of characters.
For editing the user selects panels one at a time and edits them via an editing view 2020 of the Story Creator tool, as shown in Figure 43b. The user can return to the storyboard view 2000 to review the whole story as a continuous comic strip.
In the editing view 2020 the selected panel 2002 is displayed along with a number of speech balloons 2022 that represent available phrases. Phrases represent one or two lines of dialogue by a particular doll. The user selects a speech balloon 2022 with an available phrase and inserts it in place of an empty speech balloon 2004 by dragging and dropping into the panel 2002 to form a conversation. The user can navigate through all available speech balloons 2022 with suitable buttons 2024. Optionally, the user can type text into an empty speech balloon 2004, thus defining a new phrase.
The selection of phrases can be limited by the choice of setting or plot. A subgroup of 'recommended' phrases can be shown at the top of the list, with less suitable phrases following. A user can scroll through the balloons until a desired phrase is found. Optionally a searching resource can be provided so as to search within the available phrases. A user can select a phrase by dragging a balloon to a character. A user's choice of phrase can change the phrases that are available for another character, or it can change the options available in the next panel, or it can cause the Story Creator tool to suggest some relevant phrases (the user can of course ignore the suggestions and instead choose other items if desired).
Optionally a phrase can only be used in certain types of panel. A context tag can be associated with phrases to identify suitable phrases for a chosen panel. For example, a laughter phrase is only used in a panel where phrases are spoken simultaneously, so that while a doll says something funny another doll laughs.
Once a panel is completed the user can proceed to the next panel, or review completed panel(s). A pre-set number of panels can be available (for example 20 or 30 panels). Panels can be added as the user proceeds until the scenario is finished. The maximum number of panels that can be added can be limited, for example to 100 panels. The limit can be based on the number of phrases in the panels the user has selected. The Story Creator tool also provides the option of deleting or modifying panels.
Optionally, the phrases the Story Creator tool provides can be adapted to a particular user. For instance, the Story Creator tool may request information from a user before the user can start assembling a scenario, such as the user's doll's name, friend's doll's name, pet, favourite colour etc. When the phrases are presented to the user, certain balloons will have been customised to contain this personalised text, for example "I want to buy a lilac dress, because that's my favourite colour!". Additionally or alternatively doll settings and/or the doll personality and other parameters can be used to adapt the phrases.
The Story Creator tool provides the option of saving the scenario while it is being created, and once it is complete. The Story Creator tool also permits the user to edit or delete a previously generated scenario. The Story Creator tool can provide an option to enter a newly-created scenario into a monthly competition, and/or to start again with a new, blank, storyboard.
The Story Creator tool can provide an option to play back a scenario so that the user can listen to the conversation online as they are editing a scenario to ensure that the scenario makes sense. A 'karaoke style' visual read-through with text display and cues may also be provided. The user can determine a starting point, so as to listen to a subset of the panels.
When the storyboard is completed the Story Creator tool provides an option for the user to compile (and thereby prepare for download) the newly-created scenario. The Story Creator tool uses a scripting engine to compile scenario data from the phrases selected by the user. The compilation includes obtaining audio data for the dolls to play, and creating instructions for the dolls to follow so that they can play the audio data in the right order. The audio data corresponds to the selected phrases. The audio data is provided in multiple voices so that the scenario can be played on different dolls. The audio data can be synthesised on demand and as required (for example for personalised phrases, or for phrases the user has generated), as described above, or it can be made available as pre-recorded data. Optionally a phrase can be spoken in various ways, for example with different inflection or tone. By providing variations within phrases the scenario can be different every time it is played.
The user is notified when their scenario is compiled and ready to download.
Additionally, friends of the user can be notified too. Previously-created, saved scenarios can also be viewed and downloaded. Generally scenarios can be made available to other users, both for viewing and for downloading. Once a scenario has been downloaded to dolls, it can be selected to be used as a conversation like any other scenario.
The Story Creator tool can be updated by adding new locations or settings and new plots, as well as new phrases. This can enable the Story Creator tool to remain engaging even to frequent users.
Figure 44 shows a schematic block diagram of an apparatus or computing device 2100 that is adapted to provide the Story Creator tool functionality. The computing device 2100 might also be adapted to provide certain aspects of the functionality of the Authoring Tool as described above. The computing device 2100 comprises a processor 2102, a memory 2104 and a database 2106. A graphical user interface module 2108 provides the authoring interface to a display device 2114, and receives user input from an input device 2116. The graphical user interface module 2108 can provide the user input (in suitable form) to a conversation engine 2110 which can generate suitable conversation data. If speech synthesis is required, a speech synthesis module 2112 is provided.
Conversation data can be output to an audio output device 2118.
COMMUNICATION INTERFACE
As illustrated in Figure 5, a doll 100 connects to a server 200 via a computing device 260 connected to the internet 262. When a doll connects to a computer, the communication interface software detects the doll, accesses its Unique Identifier (UUID) and sends this information to the server to check:
1. The doll is a legitimate doll (the UUID exists)
2. If the doll has been registered yet
3. The doll is registered to the current logged in user
4. What personality traits and other settings are attached to the doll
If either of 1 or 3 is not true, then a message is displayed by the communication interface software, for example stating "The doll you have plugged in belongs to a different account. To continue with this doll, please log into the correct account.".
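A minimal sketch of this check sequence is shown below. All of the helper names (server_uuid_exists, start_initialisation and so on) are illustrative assumptions, not names from the actual system; the stubs stand in for queries that would go to the server.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative stubs: in the real system these would be server queries.
 * Every name here is hypothetical. */
static bool server_uuid_exists(const char *uuid)   { (void)uuid; return true; }
static bool server_is_registered(const char *uuid) { (void)uuid; return true; }
static bool server_registered_to(const char *uuid, const char *user) { (void)uuid; (void)user; return true; }
static void start_initialisation(const char *uuid) { (void)uuid; }
static void fetch_doll_settings(const char *uuid)  { (void)uuid; }

static void on_doll_connected(const char *uuid, const char *logged_in_user)
{
    if (!server_uuid_exists(uuid)) {                    /* check 1: legitimate doll */
        puts("This doll is not recognised.");
        return;
    }
    if (!server_is_registered(uuid)) {                  /* check 2: registered yet? */
        start_initialisation(uuid);                     /* name, personality, voice, ... */
        return;
    }
    if (!server_registered_to(uuid, logged_in_user)) {  /* check 3: correct account */
        puts("The doll you have plugged in belongs to a different account. "
             "To continue with this doll, please log into the correct account.");
        return;
    }
    fetch_doll_settings(uuid);                          /* check 4: traits and settings */
}

int main(void)
{
    on_doll_connected("example-uuid", "example-user");
    return 0;
}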
If the doll has not been registered before (step 2), initialisation of the doll starts. In the course of initialisation the user selects for the doll:
* a name
* a personality
* a voice
* other customisable information: doll's pet, doll's pet's name, where doll 'lives', etc.
After selection this information is saved to the doll and also sent to the website.
For initialisation the software may obtain information from the server, such as:
* A list of available doll names
* A list of pre-set personalities and their corresponding traits
* A list of voices (and samples)
On subsequent connections the software may:
* inform the website that it has spoken the golden phrase.
* inform the website that its count of the number of conversations it has been involved in has changed.
* inform the website of how many conversations it has been involved in.
* request the download of a number of Doll Name Audio files in its chosen voice (as described in more detail above with reference to Figure 5).
* request the download of a new theme. This would entail downloading the theme definition file in the doll's chosen voice.
An example of the communication interface, the so-called 'Air app', for enabling connection and synchronisation, is now described.
A doll cannot connect to a server and/or website directly. Instead, the communication interface acts as a connector between the server/website and a doll. In operation the communication interface is connected at the same time to the server/website and to the doll. It synchronises the data on the doll(s) with the data on the server/website. If a user changes the scenario, name or attributes for the doll on the server/website then the doll's contents (such as the doll data file residing in the doll) are changed by the communication interface.
The doll's statistics such as usage counters, names encountered and other details are transmitted through the communication interface to the server/website.
The communication interface may additionally act as a basic maintenance tool for the doll and to a certain extent can repair broken dolls automatically.
Figures 45a, 45b and 45c illustrate some exemplary screen shots of different messages and user input requests of the communication interface. The interface is deliberately very simple since it is intended to be used by children.
Doll Detection/Registration: the communication interface takes the form of software on the doll owner's computing device (e.g. PC, laptop, tablet) that sits in the user's system tray waiting for a doll to be connected. The communication interface expects a flash drive with the name "I_AM_A_TOY". When such a drive is detected the communication interface begins synchronisation with the server. Figure 45a shows a message requesting a doll to be connected. Figure 45b shows user input fields for the user to sign into his account with the server.
Once the doll is connected, the communication interface then checks to see if the doll is registered to the current user's account. If the communication interface determines that the doll is already registered, but to a different user's account, then a message such as: "The doll you have plugged in belongs to a different account. To continue with this doll, please log into the correct account." is displayed. If the doll is not registered to an account yet, then the user is asked to register the doll with the website, as illustrated in Figure 45c.
Scenario Synchronisation: a scenario that is available on the website can be selected by the user. When a scenario is selected it is queued for download.
Upon synchronisation with a doll the communication interface checks to see if the scenario that is on the doll matches the one that is selected on the website. If a different scenario is selected then the communication interface downloads the scenario from the website and then transfers it to the flash storage on the doll. If the doll does not have a scenario loaded onto it then the selected one on the website is downloaded even if it is not queued.
Name Synchronisation (as described above with reference to Figure 5): the name data of the doll (its own name as well as the names of other dolls) comes in two parts:
* a name identifier: a file which specifies the name(s) (for example in text format, or as a reference such as a name ID number); and
* audio data: file(s) of the voice recordings of name(s).
The communication interface checks for the following name changes as part of the synchronisation:
* if the user changes the name of the doll on the website then this name is downloaded and transferred to the doll through the communication interface.
* the name references of recently encountered dolls are read from the doll and reported to the website. The communication interface checks to make sure that the name audio data files that are stored on the doll are correct and complete. If any of the name audio files are missing from the doll then the communication interface downloads them from the website and transfers them to the doll. Older names are replaced with the newer ones.
Statistic Synchronisation: as part of the synchronisation process, toy characteristics data, such as the relevant statistics about the doll (e.g. who they have communicated with, how many conversations they have had and whether certain phrases have been said) are communicated back to the website where they are stored in the database and evaluated (e.g. points may be awarded to the doll's online account).
Security: all of the exchanges with the website are encrypted and signed before being sent. This makes it much harder for someone to corrupt the system and pretend to be a doll. The responses from the website are encrypted and signed and sent back in the same manner.
Debugging: the software incorporates debugging functionality, via a debug menu that enables testers to examine the communication logs and perform administrative tasks such as forcing the doll contents to be completely replaced.
Only authorised users can access the debug menu. The debug menu is only provided for prototype and pre-production use, and is not included in the end user software.
Installation: the installation can be started directly from a website using an install badge or using a classic executable installer. The installer then installs or upgrades the required libraries on the user's computer to the correct version and then installs the app.
H-BRIDGE CIRCUIT ARRANGEMENT
Figure 46 shows an example of a traditional all-n-channel FET H-bridge amplifier circuit arrangement 900 known from the prior art. The circuit includes amplifier circuitry arranged to connect a power supply (e.g. a battery with voltage VBAT) to a load (e.g. a speaker), for example as part of a portable electronic device. The circuit comprises four FETs, Q1, Q2, Q3, Q4, which may for example be power MOSFETs (metal-oxide semiconductor field-effect transistors). Each FET Q1, Q2, Q3, Q4 comprises a gate (G), a source (S) and a drain (D). The FETs Q1, Q2, Q3, Q4 also each incorporate back-biased diodes 902, 904, 906, 908 from the drain to the source, which can present a reverse-power-connection hazard since there is a current path from ground to supply through these diodes.
In prior art systems such as this, in order to avoid problems associated with reverse polarity (i.e. when the power supply is connected with opposite polarity, such as if the batteries are inserted the wrong way round), a diode 910 is included in the circuit before any of the transistors Q1, Q2, Q3, Q4. There is a voltage drop over this diode 910, even during normal operation, creating heat which reduces the efficiency of the system and reduces the available output power. This is particularly a concern with portable electrical devices.
Hybrid H-bridge arrangement
Figure 47 shows an amplifier circuit 920 incorporating a hybrid H-bridge arrangement, to address the problems discussed above. The upper pair of transistors 922, 924 (electrically closest to the power supply) are bipolar transistors rather than FETs (each bipolar transistor comprising an emitter, a base and a collector). The bipolar transistors 922, 924 do not have the back-diode (i.e. 902, 904, 906, 908) present in FETs. The other transistors Q2, Q4 are FETs as before.
The H-bridge circuit 920 shown in Figure 47 is essentially a two-leg pulse-width modulation (PWM) push-pull bridge circuit. On each side of the H-bridge 920, the collector of the bipolar transistor (e.g. 924) is connected to the drain (D) of the accompanying field-effect transistor (e.g. Q4). The transistors form inverters compared to the incoming logic (looking at it from a logic-gate point of view). The transistors provide current gain, and are low-power when "off" (i.e. both control signals are high, which pulls the output terminals (e.g. speaker terminals) to ground, but no current, or only a leakage current, flows).
Reverse-biased diodes D9 and D10 ensure that when the power supply connection is reversed (e.g. if the battery is inserted the wrong way round), the base of bipolar transistors 922, 924 remains (approximately) at the same potential as the respective emitters, ensuring that bipolar transistors 922, 924 do not turn on (as bipolar transistors can operate in reverse connection). Thus the current that flows in the case of reverse polarity is limited to a safe level by the resistors R13 and R15 which, in the embodiment illustrated, are each 560 Ω.
This resistance can be altered to suit an application which has a greater or lower safe back current level.
As indicated in the figures, each diode comprises an anode (A) and a cathode (C), as those skilled in the art will appreciate.
Timing control
To control the timing of the circuit 920, small-signal diodes D4, D6, D7, D8 and resistors R12, R13, R14, R15, R19 and R20 are provided.
The timing circuitry is included with a view to inserting a small amount of dead-band, i.e. to avoid shoot-through current, which can occur if both transistors are on. For circuits with, say, four control signals, this can be performed by delaying the relative edges of the control signals. However, when using a low-cost microcontroller, pin limitations can mean that only two pins are available for use, as illustrated by pins 1 and 2 of connector J2 in the figures. In this case, the relative timing of the transistors in each half bridge can be controlled by the circuitry alone. An example low cost microcontroller comprises a printed circuit board with eight connection points (pins): two for the control signals, two for power connection, two for the timing circuitry and two for the speaker output.
Another objective is to have a low stand-by current (i.e. when not driving a speaker, the circuit should consume very low power because the circuit is connected to the battery even when "off").
The inductor L7 at the top of the bridge also helps soak up any short transient shoot-through. When both control signals are high, the circuit is also very low power, since there are no bias currents in this case. The hybrid bridge is unusual in that the top (bipolar) transistors 922, 924 are effectively current controllers, whereas the lower (FET) ones Q2, Q4 are directly voltage controllers.
Switching sequence
The following switching sequence is described with respect to the control signal PWM+. The circuit 920 is largely symmetrical and so the switching sequence for the signal PWM- will substantially correspond, as those skilled in the art will appreciate.
* When the control signal is low, current flows through diode D6 and resistor R14, switching on bipolar transistor 924.
* When the control signal switches from low to high, the gate capacitance of FET Q4 coupled with resistor R19 slows the turn-on of the lower transistor Q4 slightly. Diode D7 serves to speed up the raising of the base of bipolar transistor 924, which would be slower without diode D7; this is partly slowed due to the capacitance of diode D9, but also due to carriers lingering in the base of bipolar transistor 924. This allows high current flow briefly while the carriers and charge are injected/removed.
* When the control signal is high, the base of bipolar transistor 924 is pulled up, so no current flows there, and hence no current flows through the bipolar transistor 924. VGS of FET Q4 is high, causing it to be on.
* When the control signal switches from high to low, diode D6 helps to delay bipolar transistor 924 turn-on (by its voltage drop appearing like a 0.6 V voltage source) until FET Q4 has begun to turn off, and resistor R14 limits the ultimate base current (over-saturating bipolar transistor 924 would make it take longer to turn off).
Protection against damage
In the circuit 920 shown in Figure 47, there is no risk of the diodes becoming damaged if an AC input instead of a DC input is applied. In particular, the PNP bipolar transistors 922 and 924 and/or diodes D9 and D10 will not become damaged by an alternating current in this circuit. The body diode 906 of FET Q4 is in series with the effective emitter-base diode of bipolar transistor 924 (so there are two diode drops between pin 2 of Q4 and pin 1 of bipolar transistor 924), but the base is held one diode drop below pin 2 of bipolar transistor 924, so there is no current between pin 3 and pin 1 of bipolar transistor 924 (emitter and base), effectively because diode D9 "steals" it all. Current does flow through diode D9 and resistor R15, but it is only very small (about 4 mA or 5 mA), i.e. not the many amps that could flow if two diodes were connected directly across the supply, which would be the case for a pair of MOSFETs as in the prior art. With the two half-bridges, around 10 mA will flow if the input supply is reversed.
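As a rough worked check, assuming for illustration a 3 V battery supply and a 0.6 V diode drop (neither value is specified here), the reverse current through one half-bridge is approximately (3.0 V - 0.6 V) / 560 Ω ≈ 4.3 mA, consistent with the 4 mA to 5 mA figure above, and roughly double that for the two half-bridges together.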
Example component specifications
In the presently-preferred embodiment as illustrated in Figure 47, the following components are used. These are given merely as examples. In particular, alternative values of the components may be used, as those skilled in the art will appreciate.
Reference numeral   Component            Description/value
Q2                  FET                  Power MOSFET, MGSF2N02ELT1G
Q4                  FET                  Power MOSFET, MGSF2N02ELT1G
922                 Bipolar transistor   PNP bipolar transistor
924                 Bipolar transistor   PNP bipolar transistor
D4                  Diode                n/a
D6                  Diode                n/a
D7                  Diode                n/a
D8                  Diode                n/a
D9                  Diode                n/a
D10                 Diode                n/a
R12                 Resistor             330 Ω
R13                 Resistor             560 Ω
R14                 Resistor             330 Ω
R15                 Resistor             560 Ω
R19                 Resistor             330 Ω
R20                 Resistor             330 Ω
L6                  Inductor             100 µH
L7                  Inductor             10 µH
HEARTBEAT
Figure 48(a) shows an example signal 940 prior to any modification. The signal is limited to a certain amplitude, shown by line 942. A 'characteristic signal' 944 is inserted into the signal periodically, shown in Figure 48(b), thereby creating an augmented signal 946. In a preferred example, the characteristic signal 944 has a length corresponding to a single sample period. For a playback device with a sampling rate of 15625 Hz this corresponds to a length of 64 microseconds. In a preferred example, the signal 940 is an audio signal.
The characteristic signal 944 is interposed periodically into the audio signal 940.
In a preferred example, this period is every 1000 samples (i.e. a single sample characteristic signal 944 followed by 999 audio samples). This corresponds to a time period of 64 milliseconds for a device with a sampling rate of 15625 Hz.
Preferably, the first characteristic signal 944 is inserted as the first sample. The addition of a single-sample characteristic signal every 1000 samples increases the original signal 940 file size by approximately 0.1%. The characteristic signal 944 has an amplitude higher than the amplitude limit 942. This makes the characteristic signals 944 easy to distinguish from the rest of the augmented signal 946, by merely monitoring the augmented signal 946 for deviations above a particular amplitude limit.
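A minimal sketch of the insertion in C follows, assuming 16-bit samples with the audio limited to ±30000 and the marker at full scale; both values are illustrative, as the text only requires the marker to exceed the signal's amplitude limit 942.

#include <stdint.h>
#include <stddef.h>

#define MARKER_VALUE 32767            /* above the assumed +/-30000 audio limit */

/* Emit one characteristic (marker) sample, then 999 audio samples, repeating,
 * so markers land every 1000 output samples starting at sample 0.  The output
 * buffer must hold at least n + n/999 + 1 samples.  Returns the output length. */
static size_t augment(const int16_t *in, size_t n, int16_t *out)
{
    size_t o = 0;
    for (size_t i = 0; i < n; i++) {
        if (i % 999 == 0)
            out[o++] = MARKER_VALUE;  /* single-sample characteristic signal */
        out[o++] = in[i];
    }
    return o;
}

The roughly 0.1% size increase quoted above follows directly from one extra sample per 999 audio samples.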
The augmented signal 946 is then encoded so as to be transferred to or stored in the playback device. The encoding process (and the subsequent decoding), or the act of storing the file, may corrupt the signal. This would result in the quality of the playback suffering, for example audible 'glitches' or the audio file breaking to silence after starting. In embodiments where speed is favoured over reconstruction of corrupted data, simply identifying such corrupted files will suffice. The playback device can then skip the corrupted file, or (for example) flag it for re-transmission either immediately or at a later time.
In the present example, the playback device determines whether the signal is corrupted by checking the first 2000 samples of the playback (two periods of the characteristic signal 944) for the presence of the characteristic signal 944. Two periods (i.e. 2000 samples) are checked to minimise the possibility of the checking process missing one characteristic signal 944 and mistakenly marking the signal as being corrupted. This checking process involves checking each sample's amplitude and determining whether it is above a threshold. If it is above the threshold 942, the playback device assumes that this is the characteristic signal 944 and hence the signal 940 has survived the encode / decode / store process without becoming corrupted.
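The corresponding check might be sketched as follows, again assuming a 16-bit format and an illustrative amplitude limit.

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <stddef.h>

#define AMP_LIMIT 30000   /* assumed audio amplitude limit 942 */

/* Scan the first 2000 samples (two marker periods) for a sample whose
 * magnitude exceeds the audio limit; finding one is taken as evidence that
 * the file survived the encode/decode/store process intact. */
static bool heartbeat_present(const int16_t *s, size_t n)
{
    size_t limit = (n < 2000) ? n : 2000;
    for (size_t i = 0; i < limit; i++)
        if (abs(s[i]) > AMP_LIMIT)
            return true;    /* characteristic sample found */
    return false;           /* treat as corrupted: mute, skip or re-request */
}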
The playback device may iteratively perform this check on the file as it is being played, and if two periods pass without the detection of a characteristic signal, the rest of the signal is muted. Alternatively, the playback device could perform the check on just the first two periods of the signal to determine whether the whole file has been corrupted or not. It is preferable for the entire signal to be augmented with the characteristic signal, and for the decoder to check for this iteratively, as the decoder buffer could theoretically underrun at any point. This could lead to errors such as 'machine-gunning', where a ring buffer repeats a short section of the signal. Such errors would also be flagged up by checking for the characteristic signal 944.
As the characteristic signal 944 is in this example the highest amplitude section of the signal, any corruption would likely have the greatest effect on this. Therefore, determining if corruption of the characteristic signal 944 has occurred (i.e. determining the presence of a certain amplitude) is a good proxy for corruption of the entire signal.
Figure 49 shows an example flow diagram of the preparation of the augmented signal 946. This process is preferably performed by a device external to the playback device, and the augmented signal 946 is encoded and/or compressed, then transferred to the playback device. A schematic diagram of a suitable device is shown in Figure 50 and is described below.
The process begins at S1 with acquiring the signal. Then, the characteristic signal 944 is inserted as the first sample in step S2. The next 999 samples are then left as original, i.e. skipped (step S3), and another characteristic signal 944 is inserted if the end of the signal has not been reached (shown by feedback step S4). The process may then continue with another signal, as shown by feedback step S5, before ending at step S6.
Figure 50 shows a device 960 having means for carrying out the process shown in Figure 49. Signal modifier module 962 performs the steps described above, utilising the processor 964 and associated memory 966. In use, the original signal 940 is received by the device 960 and stored in the memory 966. This is then passed to the signal modifier 962 which modifies the original signal with the characteristic signal 944 as described above. The augmented signal 946 is then encoded by encoder 968 and passed back to the memory 966, from where it is transmitted to a playback device by signal transmitter 970. This transmission may occur immediately, or at a later time, and could be via a wired or wireless link to the playback device. The encoder 968 may not pass the encoded signal to memory 966, but rather transmit it directly via signal transmitter 970, shown by a dotted arrow.
Furthermore, the encoder 968 may not be necessary if the playback device can cope with uncompressed (raw) signals. The encoder 968 may comprise many different sub-components, described in more detail below.
Figure 51 shows a flow diagram of the processes prior to and including playback performed by the playback device. The process starts with acquiring the augmented signal at step S1. Prior steps to this may include decoding the augmented, encoded signal (to produce just the augmented signal). The device then determines the amplitude of the first sample of the signal in step S2. The next step, S3, comprises comparing the amplitude with a pre-stored level, corresponding to the level above which the sample can be assumed to be the characteristic signal 944. If the sample is determined to be a characteristic signal 944, the process jumps to step S6a and the rest of the signal is played by the playback device. If the sample is not determined to be a characteristic signal, the process moves on to the next sample in step S4. This process continues for 2000 samples (shown by feedback to S2); if no characteristic sample is found before the sample number reaches 2000 (step S5), the rest of the signal is muted (step S6b).
Figure 52 shows part of the playback device 980 adapted to perform the process described above in relation to Figure 51. In use, the playback device receives the augmented, encoded signal and stores it in memory 982. This is then decoded by decoder 986 utilising processor 984. The decoder 986 may comprise many different sub-components, described in more detail below. The decoded signal is then passed to an amplitude monitoring module 988 which measures the amplitude of each sample. The logic circuitry 990 then determines whether this amplitude is above the pre-determined level and decides an action accordingly.
This may be to send the signal to the output module 992, or to feed it back to the amplitude monitoring module 988.
Alternatives and modifications
Although the characteristic signal 944 has been described as a single sample length signal, it could be more complex than this. For example, it could be a multi-sample length piece of code. This would afford greater potential for determining corruption, but would take longer to process. A person skilled in the art would realise that the trade-off of longer processing for greater corruption detection may be worthwhile for some applications and not others.
The period of the characteristic signal 944 is described above as being every 1000 samples. A shorter period (more frequent insertion into the signal 940) would result in faster determination whether a signal is corrupted, but would increase the file size more. Conversely, a longer period would make it slower to determine corruption of the signal, but the file size would be smaller. A person skilled in the art would recognise that different applications would have different priorities and thus the sampling period would be altered accordingly.
Furthermore, although the description above describes two periods being checked for the corrupted signal, more or less could be checked. If just one period is checked, the probability of the checking process missing' the characteristic signal 944 is the greatest. This may be preferred if the checking process is very reliable. However, a reliable checking process may be processor-intensive and thus it may be more efficient to check multiple periods with a less reliable checking process.
The embodiment described above refers to the signal being an audio signal but the apparatus and method could be applied to determine corruption of any signal which has been suitably prepared.
An alternative method utilising similar apparatus to perform the same function is described below. Rather than inserting a characteristic signal 944 periodically into the signal 940, a characteristic frequency could be added to the entire signal.
Thus, when the frequency spectrum of the signal is inspected, there will be distinct peaks corresponding to the added characteristic signal. The characteristic frequency signal would preferably have a frequency outside of the range of the rest of the signal, and outside human hearing (or outside the frequency response range of the playback device). This would ensure that the characteristic signal can be distinguished from the rest of the signal, and does not negatively impact the sound quality.
The characteristic signal could be a mixture of a number of different frequencies so that a number of peaks are shown in the frequency spectrum, or just a single frequency. A single frequency could be distinguished easily, but would be more likely to be missed or subject to frequency-dependent corruption. Using multiple (or a range of) frequencies would increase the file size and potentially increase the processor time necessary to distinguish the characteristic signal from the rest of the signal.
In use, the playback device inspects the frequency components of the signal, for example by performing a Fast Fourier Transform (FFT) on the time-domain signal. This analysis could be performed over the entire frequency spectrum, or just over a range of interest. The latter option would likely be significantly faster.
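Rather than a full FFT, the energy at a single frequency of interest can be measured directly; the sketch below does this with a one-bin discrete Fourier transform. The sampling rate and marker frequency arguments are illustrative assumptions.

#include <math.h>
#include <stdint.h>
#include <stddef.h>

/* Normalised power of the signal at one frequency, computed as a one-bin
 * DFT over n samples.  fs is the sampling rate; f_marker would be chosen
 * outside the audio band (both values are assumptions for illustration). */
static double bin_energy(const int16_t *s, size_t n, double fs, double f_marker)
{
    const double pi = 3.14159265358979323846;
    double w = 2.0 * pi * f_marker / fs;
    double re = 0.0, im = 0.0;
    for (size_t i = 0; i < n; i++) {
        re += s[i] * cos(w * (double)i);
        im -= s[i] * sin(w * (double)i);
    }
    return (re * re + im * im) / ((double)n * (double)n);
}

A playback device would compare this energy against a preset threshold; if the characteristic frequency is absent, the file is treated as corrupted.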
The resulting frequency spectrum shows the characteristic frequency if the signal has not been corrupted. However, if just a small amount of corruption has occurred, it is unlikely to be detectable in this way. One way of improving the accuracy of the method is to include a number of different frequencies at different relative amplitudes. This gives 'two dimensions' to determine if corruption has occurred. For example, if the relative order of the amplitudes of the characteristic frequencies has changed, it is likely that corruption has occurred.
A potential disadvantage of this method is that the process of inspecting the frequency domain may introduce errors itself, which may make the signal appear corrupted when it is in fact not. Furthermore, processes such as FFTs are generally processor-intensive, and applications focussing on speed or with limited resources may not be suited to this alternative method.
OVERLAY BUFFER
Often, due to budget, size or power constraints, the processor used in a particular device may have severely limited resources. The Random Access Memory (RAM) of the processor in one such example device (a talking doll, for example) is 8 kilobytes (kB) in size. The Serial Flash Memory device (Flash) requires half of this (4 kB) to be reserved as an Erase Buffer, active when the filesystem and Universal Serial Bus (USB) are in use.
This constraint would render the decompression of audio files impractical, thus effectively reducing the length of audio files able to be stored on the device (in one example from 20 minutes to approximately 3.5 minutes). In order to minimise the impact of this constraint on the system design, the Erase Buffer RAM allocation is used for other operational states without conflict with its primary function.
The additional functions and the overlaid entities are shown in Figure 53, which collectively use 4 kB of RAM:
Speech Playback:
* Output Audio Table Of Contents
* Decode Table
* Decode Buffer
* Pulse Width Modulation (PWM) Buffer
Virtual Electronically Erasable Programmable Read Only Memory (EEPROM) access:
* EEPROM Buffer
Theme Engine operation:
* Attributes Data
This allocation of RAM is predetermined at compile time.
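In C this overlay might be expressed as a union placed over the single 4 kB region; the member sizes below are illustrative guesses, since only the 4 kB total is stated above.

#include <stdint.h>

/* Sketch of the compile-time RAM overlay.  The members mirror the entities
 * listed above; all individual sizes are assumptions for illustration. */
typedef union {
    uint8_t erase_buffer[4096];      /* Flash erase / USB filesystem writes */
    struct {                         /* speech playback */
        uint8_t  audio_toc[256];     /* Output Audio Table Of Contents */
        uint16_t decode_table[768];  /* Huffman decode table (1.5 kB) */
        uint8_t  decode_buffer[1024];
        uint8_t  pwm_buffer[1280];
    } playback;
    uint8_t eeprom_buffer[512];      /* virtual EEPROM access */
    uint8_t attributes_data[1024];   /* theme engine attributes data */
} overlay_t;

static overlay_t overlay;            /* one 4 kB region, reused by all groups */

Because the union members share the same storage, the mutual exclusion between the write/erase group and the playback group described below is a correctness requirement, not merely an optimisation.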
The overlay of these operations onto the 4 kB of RAM reserved for the erase buffer means that there are some operational constraints, such as a limitation on speech playback while USB file activity is occurring. Two mutually exclusive groups of actions are therefore formed:
a) Flash device erase, i.e. USB file system write access.
b) Speech decompression/playback / Theme file processing.
Given that these groups of tasks would rarely be performed simultaneously, the overlay of their RAM allocations would not lead to critical operational constraints.
A memory controller is adapted to control the allocation of RAM, and thus switches the allocation from one group of tasks to the other if required. In one example, the write / erase group of tasks takes priority.
Alternatives and modifications
Although the above description refers to an example device which is a talking doll, any device which suffers from similar RAM constraints can utilise the structure described above. The mutually exclusive groups of actions may vary for different devices, but may still fall into the broad categories of a) Write / erase and b) Decode / Read.
COMPRESSION -AUDIO CODING SCHEME
Audio data is used in this system for interaction between dolls, amongst other functions. This audio data is compressed and stored in the doll. A trade-off between storage space and decoder complexity determines the nature and extent of this compression. Thus the system described herein responds to a particular set of requirements. In certain examples, such as when audio files are stored in and played by a doll, these requirements are asymmetrical, those for the encoder being very different from those for the decoder.
These requirements are as follows:
1. Very low decode complexity, both in space and time.
2. No decoder buffering.
3. Highest possible quality at a wide range of compression ratios.
4. Capable of both lossless and lossy operation.
These requirements are typically present when a large number of low cost decoders are to be used to decode stored audio data of widely varying length at the best possible quality for each length, i.e. essentially lossless for short recordings and progressively lower quality as the recording length increases. This is analogous to choosing the tape speed of an analogue tape recorder to make a best quality recording of a given length of music onto a given length of tape.
The system uses a variable data rate and virtually unlimited encode complexity.
It also has high delay in that the encoder makes multiple passes over its input before any output is made.
The encode process contains a number of steps, shown in Figure 54.
Step 1. Normalize the peak level of the signal. It is expected that the types of signal most suited to this application are professionally recorded samples of single person speech, as required for a speaking toy. In such cases simple level normalization is all that is required. It is also normal that the bit-width of the signal presented to the encoder is greater than that which is required at the output of the decoder due to cost limitations of the circuitry (typically a Digital to Analogue Converter (DAC)) following the decoder.
Step 2E. Apply a 'curve bender' to the signal, such that the gain is larger for small signals than for large ones. This has an effect similar to telecommunication A-law (or µ-law), reducing the audible effect of quantization. The degree of 'bend' applied may be selected from none (allowing lossless encoding) in multiple steps up to severe. This step is described in more detail below.
Step 3. The signal is quantized to a selectable number of bits. This quantization determines the level of noise in the final output. If quantization to the required number of bits at the output of the decoder is used, the system is compatible with a lossless encode-decode process.
Step 4. A noise-gate is applied to the signal. This mutes the signal if it remains below a certain, selectable, level for longer than a selectable time period. The intention here is to reduce the data transmission requirement of periods of silence. Normally, when the system is used with professionally edited material, this feature need not be used. When this step is disabled a lossless encode-decode process may be obtained.
Step 5. Select a pre-emphasis filter. Pre-emphasis will be used to apply some order of differentiation to the signal in order to reduce the energy in, and thus the number of bits required to encode, the signal. The purpose here is to 'whiten', i.e. to flatten, the spectrum of the signal. All audio signals, and especially speech signals, contain most of their energy in the low frequency region and progressively less energy in the higher frequencies. All the available orders of filter are tried with the complete audio signal and the one which yields the lowest energy is selected. The selection step may not be explicitly performed and may be skipped (shown by dotted arrow) if, for example, the signal has the same features as a previous signal and thus the same pre-emphasis filter is used.
Step 6E. Apply a pre-emphasis filter of the selected order. The type of filter used is the exact bit-for-bit inverse of the de-emphasis filter to be used in the decoder. Thus the "pre-emphasis feeds de-emphasis" chain is always lossless.
Step 7. Generate a probability table from the resulting signal and from this generate tables for, for example, Huffman encode and decode of the signal.
Huffman coding is one (preferred) example of a lossless coding technique. Other coding techniques, such as Shannon-Fano, could also be used. Similarly to step 5 above, generation of this table could be skipped (shown by dotted arrow) if the same encode table is used as on a previously encoded signal.
Step 8E. Apply the Huffman encode table to the signal to generate an encoded (compressed) version of the signal.
Step 9E. Send the selected curve bender amount (from Step 2), pre-emphasis filter order (from Step 5), Huffman decoder table (from Step 7) and Huffman encoded version of the signal (from Step 8) to the decoder. It is not necessary to send the value of the number of bits used in the quantization at Step 3 to the decoder.
The decode process, though containing several steps, is significantly simpler.
This provides the advantage of being able to have a much simpler decoder than encoder, and the process of decoding is less processor intensive. This is particularly advantageous in applications where the decoder is a simple device such as a toy, or is required to perform the decode process quickly.
These steps are numbered to correspond with the steps in the encode process. They are executed in reverse order from 9D to 2D, as shown in Figure 55.
Step 9D. The different parts of the data from the encoder are separated for individual use by the following steps: the Huffman encoded version of the signal and the Huffman decoder table for Step 8D, the pre-emphasis filter order for Step 6D, and the curve bender amount for Step 2D.
Step 8D. Apply the Huffman decode table to the encoded signal to generate a decoded (uncompressed) version of the signal.
Step 6D. Apply a de-emphasis filter of the selected order to reverse the effect of the filter in 6E.
Step 2D. Apply an inverse curve bender to the signal to undo the transformation done by Step 2E.
Detailed description of each step
Each step (and its inverse if there is one) is described below in more detail.
Step 1. Normalize the peak level of the signal. The whole audio input is scanned for the sample having the maximum magnitude, i.e. the maximum absolute value; all samples are then divided by this value. The maximum value found in the resulting signal is exactly one.
Steps 2E and 2D. In order to make the decode process as simple as possible, the curve bender selected has a very low decode complexity. To make the system capable of lossless and lossy transmission, and to perform well over a range of compression ratios, the curve bender has an easily controlled depth of effect. This is unlike the telecoms 'A-law', which was designed to be simple to implement in hardware (gates and wires) as opposed to instructions on a microprocessor. The decode process is described below first as it is the simpler process, but it is of course performed afterwards.
The decode process is dictated by the expansion required between the input and output ranges: the input range for the decoder is -g to +g, the output range is -1.0 to +1.0. For ease of implementation, values of g that correspond to binary shifts of the data are preferred.
An example decode equation used is y = x + ax³, where x is the input value (signal amplitude) and y is the output value.
It may be appreciated that this relation may be economically calculated in four basic operations: three multiplies and an add. Traditional A-law type curve benders, which rely on multiple shifts, were designed for easy hardware implementation and require many instructions to implement in modern software-controlled microprocessors. Thus, a cubic relationship such as the one above is easy to perform on a simple device containing a microprocessor. This family of 'curve benders' also has only one variable, a. This means that only one value needs to be sent to the decoder (from the encoder) to fully define the curve bender used.
The value of a for any g can be calculated by a = (1 - g) / g³. The values of g are selected by an index used to communicate between the encoder and the decoder. Typical values of g and a are given in Table 1 below:

index   g         a
0       1.0       0
1       0.5       4
2       0.25      48
3       0.125     448
4       0.0625    3840
5       0.03125   31744

Table 1 - Example values of g and a

Figure 56 shows the curve bender graphs for the various values of g as in Table 1. The encode curve bender is the same as this, but reflected in the line y = x.
The encode equation is somewhat more complicated. Although the relationship between x and y depends only upon a, temporary variables u and v are included to simplify the equation:

u = cube_root(x/(2a) + square_root((x/(2a))² + (1/(3a))³))
v = cube_root(1/(2a) + square_root((1/(2a))² + (1/(3a))³))
y = (u - 1/(3au)) / (v - 1/(3av))

Since low encode complexity is not a primary aim of the process the added complexity of this step is not a problem. The 'curve bender' is thus a computationally asymmetric process, which affords the advantage of being able to use a far simpler decoding device than encoding device.
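A sketch of both directions in C follows, implementing the equations exactly as given above. Per the stated ranges, the encoder's ±1 output would be scaled by g to produce the decoder's ±g input; that scaling is an inference from the ranges rather than something stated explicitly.

#include <math.h>

/* Decode-side curve bender: y = x + a*x^3, i.e. three multiplies and an add. */
static double bend_decode(double x, double a)
{
    return x + a * x * x * x;
}

/* Encode-side inverse, following the closed form above.  a == 0 (index 0)
 * means no bend, i.e. lossless.  A real encoder would precompute v once. */
static double bend_encode(double x, double a)
{
    if (a == 0.0)
        return x;
    double p = 1.0 / (3.0 * a);            /* 1/(3a) */
    double h = x / (2.0 * a);              /* x/(2a) */
    double u = cbrt(h + sqrt(h * h + p * p * p));
    double k = 1.0 / (2.0 * a);            /* 1/(2a) */
    double v = cbrt(k + sqrt(k * k + p * p * p));
    return (u - p / u) / (v - p / v);      /* normalised to -1..+1 */
}

For example, with a = 4 (so g = 0.5), bend_encode(1.0, 4.0) returns 1.0 and bend_decode(0.5 * 1.0, 4.0) returns 1.0, illustrating the round trip under the scaling noted above.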
Step 3. The signal is quantized to a selectable number of bits. Quantization is a well understood process. For best results dither should probably not be applied to this quantization step, as the effect of the decode curve bender (Step 2D) is to increase the perceived level of such noise. Noise shaping may be applied to this step if the Signal to Noise Ratio (SNR) needs to be improved. Noise shaping is a well understood process.
Step 4. A noise-gate is applied to the signal. Noise gates are well understood devices. As stated above, in well edited material this step may be omitted. In the Huffman encoding stage (Step 8E) it is most likely that silence will be encoded as a very small number of bits per sample, preferably one.
Step 5. Select a pre-emphasis filter.
The type of pre-emphasis filter to be used consists of a cascade of some number of simple first order differences. Such a first order difference may be characterized by the following relationship.
w[n] = x[n] - x[n - 1]

where x[n] stands for the value of the encoder input at time n, x[n - 1] stands for the value of the encoder input on the previous sample, and w[n] stands for the filtered signal value at time n. Such a filter can be performed by a differentiator circuit, such as a high-pass filter.
The inverse filter for a first order difference is a first order integrator characterized by the following.
y[n] = w[n] + y[n - 1]

Provided that the state variables in both the encoder and decoder filters are reset before operation, an encode-decode cascade is bit-for-bit accurate.
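A sketch of this matched pair in C, assuming 16-bit samples with two's-complement wrap-around arithmetic (the integer format is an assumption; the wrap-around is what makes the cascade exactly invertible):

#include <stdint.h>
#include <stddef.h>

/* k-th order pre-emphasis: a cascade of first-order differences
 * w[n] = x[n] - x[n-1], applied in place with zero initial state. */
static void pre_emphasis(int16_t *x, size_t n, int order)
{
    for (int k = 0; k < order; k++) {
        int16_t prev = 0;
        for (size_t i = 0; i < n; i++) {
            int16_t cur = x[i];
            x[i] = (int16_t)(cur - prev);  /* first-order difference */
            prev = cur;
        }
    }
}

/* Matching de-emphasis: a cascade of first-order integrators
 * y[n] = w[n] + y[n-1], reversing pre_emphasis bit for bit. */
static void de_emphasis(int16_t *w, size_t n, int order)
{
    for (int k = 0; k < order; k++) {
        int16_t acc = 0;
        for (size_t i = 0; i < n; i++) {
            acc = (int16_t)(acc + w[i]);   /* running sum */
            w[i] = acc;
        }
    }
}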
To select the filter order to be used the complete audio input is passed through all the possible filter orders. For each order, a probability table of the filter outputs is prepared, and from this the information content is calculated in bits per sample according to standard information theory. The filter order which produces the most efficient use of data is selected.
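The selection might be sketched as follows, using the entropy of the sample histogram as the bits-per-sample measure described above; the maximum order to try is left as a parameter.

#include <math.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Information content in bits per sample, from the 16-bit sample histogram. */
static double entropy_bits(const int16_t *s, size_t n)
{
    static uint32_t hist[65536];
    memset(hist, 0, sizeof hist);
    for (size_t i = 0; i < n; i++)
        hist[(uint16_t)s[i]]++;
    double h = 0.0;
    for (size_t v = 0; v < 65536; v++) {
        if (hist[v] == 0)
            continue;
        double p = (double)hist[v] / (double)n;
        h -= p * log2(p);
    }
    return h;
}

/* Try orders 0..max_order by repeatedly differencing a working copy of the
 * input, and return the order giving the lowest bits-per-sample figure. */
static int select_order(int16_t *work, size_t n, int max_order)
{
    int best = 0;
    double best_h = entropy_bits(work, n);    /* order 0: unfiltered */
    for (int k = 1; k <= max_order; k++) {
        int16_t prev = 0;
        for (size_t i = 0; i < n; i++) {      /* one more differencing pass */
            int16_t cur = work[i];
            work[i] = (int16_t)(cur - prev);
            prev = cur;
        }
        double h = entropy_bits(work, n);
        if (h < best_h) {
            best_h = h;
            best = k;
        }
    }
    return best;
}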
Step 6E and Step 6D. Apply pre-emphasis and de-emphasis filters of the selected order as part of the encoding and decoding processes.
Step 7. Generate a probability table from the resulting signal and from this generate tables for Huffman encode and decode of the signal. The crucial aim here is to minimize the size of the decode table while recognising that the decode process must make extremely low demands on the available decode processor.
The size of the decode table is not only important because it constitutes part of the data payload that must be transported from the encoder to the decoder. It is also important because the whole of the decode table, unlike the encoded audio data, must be available to the decoder processor during the decoding of every data sample. This requires that the decode table be placed in the decode processor RAM, rather than in any attached slow memory. Decode processor RAM is a very scarce resource in certain applications.
The classic Huffman decode structure is a binary tree. The decode process begins at the root and descends from node to node, via either the left or right branch depending on whether the data bit under consideration is a '0' or a '1'. At each node the decoding process either stops with the output of a sample or continues to descend the tree. This approach has the great advantage of simplicity and speed in execution, but can require a large data structure to hold the decode tree. Other methods can involve smaller data structures with very much more complex and time consuming decode algorithms. A third, hybrid, class of solutions involves using the simple decode tree but performing data compression on the tree for transmission; unfortunately this does not help when the memory available inside the decode processor is itself restricted.
The method to be used here is a classic decode tree, for minimum processor load, but packed in a manner that minimizes memory requirements without requiring more than trivial unpacking.
Each node of the classic tree can be of one of four types, depending on whether each side is an output value or a pointer to another node. In the method used here only one of the one-value-one-pointer types is used; any of the nodes of the encode/decode tables that require the unused type are flipped to become the used type.
The decode table consists of a sequence of entries (one per node) which each contain one or two items. In this implementation each item is composed of 16 bits, the maximum size of an output value is 14 bits and the maximum number of items in the table can be covered by an index of 15 bits.
The entries are of three possible types.
Type A. An entry with no children, but which contains two actual output values: the first (J) applies for a data bit of zero, the second (K) applies for a data bit of one. The value of J to be used will need to be sign extended over the two MSBs on decode. The value of K to be used is sign extended over the two MSBs during encode, shown as SS below. There are no child entries; both data zero and data one terminate the decoding of an output value.
This type of entry requires two items in the array:
11JJ JJJJ JJJJ JJJJ
SSKK KKKK KKKK KKKK
Type B. An entry with one child; it contains one actual output value (J) which applies for a data bit of zero. A data bit of one implies that the child immediately follows this entry. The value of J to be used will need to be sign extended over the MSBs on decode. The child follows this entry immediately and thus no index is required.
10JJ JJJJ JJJJ JJJJ

Type C. An entry with two children; it contains one index (P) which is taken for a data bit of zero. A data bit of one implies that the child immediately follows this entry.
The value of P to be used can include the MSB which is zero.
0PPP PPPP PPPP PPPP

It is of course possible to design more compact codes; the important point here is to be compact and easy to process.
Used as a Huffman decoder for a possible N output values, the minimum possible table size is N items, the maximum is 3N/2 - 1, that is approximately from 1.0 to 1.5 times N.
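A decoder walking this packed table might be sketched as follows. The bit ordering within the data stream and the helper names are assumptions; only the three entry layouts above are taken from the text.

#include <stdint.h>
#include <stddef.h>

typedef struct { const uint8_t *data; size_t pos; } bitstream_t;

/* Read the next bit, most significant bit of each byte first (an assumption). */
static int next_bit(bitstream_t *bs)
{
    int b = (bs->data[bs->pos >> 3] >> (7 - (bs->pos & 7))) & 1;
    bs->pos++;
    return b;
}

/* Sign-extend a 14-bit output value over the two MSBs. */
static int16_t sign_extend14(uint16_t v)
{
    v &= 0x3FFF;
    return (int16_t)((v & 0x2000) ? (v | 0xC000) : v);
}

/* Decode one output sample by walking the packed table.  The two MSBs of an
 * item distinguish the entry types: 11 = Type A, 10 = Type B, 0x = Type C. */
static int16_t decode_sample(const uint16_t *table, bitstream_t *bs)
{
    uint16_t idx = 0;                          /* start at the root entry */
    for (;;) {
        uint16_t e = table[idx];
        int bit = next_bit(bs);
        if ((e & 0xC000) == 0xC000)            /* Type A: two output values */
            return sign_extend14(bit ? table[idx + 1] : e);
        if (e & 0x8000) {                      /* Type B: value or child */
            if (bit == 0)
                return sign_extend14(e);
            idx += 1;                          /* child immediately follows */
        } else {                               /* Type C: two children */
            idx = bit ? (uint16_t)(idx + 1) : (uint16_t)(e & 0x7FFF);
        }
    }
}

Each iteration costs one table lookup, one bit read and a couple of comparisons, which keeps the per-sample load on the decode processor minimal, as required.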
Step 8E and Step 8D. Apply the Huffman encode and decode processes using the tables generated by Step 7. The method here follows directly from the description of Step 7.
Step 9E and Step 9D. These are simple packing and unpacking steps. The encoder information may be transmitted wirelessly or via wires, and is preferably not encoded, or is encoded in a pre-defined manner in which lossless recovery is assured.
Figure 57 shows a schematic diagram of an encoder. The encoder 1000 receives a signal, which is stored in the memory 1002. This signal is then normalised by signal normaliser 1006. The normalised signal is then modified by signal modifier 1008. This includes applying a curve bender and quantizing the signal as described above. The signal is then passed through a noise gate 1010 to remove redundant low amplitude sections. A signal analyser 1012 analyses this signal and selects a pre-emphasis filter from memory (shown by dashed line) for the filter module 1014 to apply. A similar process is then undertaken by the encode table generator 1016, which selects a suitable table from memory 1002, and the encoder module 1018 applies the encode table to the signal. The signal and the associated encoder information are then transmitted by the signal transmitter 1020.
Figure 58 shows a schematic diagram of a decoder 1040. The decoder 1040 receives an encoded signal and encoder information, and stores this in memory 1042. The signal splitter 1046 splits the encoder information from the encoded signal. The signal is then passed to the decoder, which uses the encoder information from the memory 1042 (shown by dashed line) to decode the signal.
The decoded signal is then filtered by filter 1050, again using the filter information from the encoder information via the memory 1042. This signal is then modified by signal modifier 1052; for example, an inverse curve bend is applied, with the value of the inverse curve bend taken from the encoder information via the memory 1042. The signal is then output via output module 1054.
Figure 59 shows an exponentially decaying sine wave, which is typical in many respects of the filtered pulse attributes of the human voice. Figure 60 shows this waveform after various different curve benders are applied; the different plots show the waveform after curve benders with different values of g, as in Table I above, are applied. The effect of the curve bender is to increase the relative amplitude of the tail. Note that the large amplitude of the signal at the left, and the nearly zero amplitude of the signal at the right, are almost unchanged; the major effect is upon the size of the signal at medium amplitudes.
Figure 61 shows a similar graph to the one shown in Figure 60 for a linearly decreasing sine wave, where the effect on the middle amplitude section is clearer. The effect of the curve bender is to increase the level, and hence the fidelity of coding, of middle range signals.
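The exact bending curves are given by Table I earlier in the document. Purely to illustrate the described effect, the sketch below uses an assumed sign-preserving power law (and illustrative g values), which likewise leaves near-zero and full-scale samples almost unchanged while lifting mid-amplitude ones:

```python
import math

def curve_bend(x, g):
    """Hypothetical curve bender: a sign-preserving power-law compander.
    The patent's actual curves come from Table I (not reproduced here); this
    stand-in has the same qualitative effect on mid-range amplitudes."""
    return math.copysign(abs(x) ** g, x)

# Exponentially decaying sine, as in Figure 59.
samples = [math.exp(-0.002 * n) * math.sin(0.1 * n) for n in range(2000)]

for g in (1.0, 0.7, 0.5):   # g = 1.0 leaves the signal unchanged
    bent = [curve_bend(s, g) for s in samples]
    # A mid-amplitude sample from the tail is boosted most strongly.
    print(f"g={g}: sample 600 goes from {samples[600]:+.3f} to {bent[600]:+.3f}")
```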
It will be understood that the present invention has been described above purely by way of example, and modifications of detail can be made within the scope of the invention.
Each feature disclosed in the description, and (where appropriate) the claims and drawings may be provided independently or in any appropriate combination.
Reference numerals and/or titles appearing in the claims are by way of illustration only and shall have no limiting effect on the scope of the claims.
Appendix A: Theme Data File
The Theme Data File consists of a Header section, a number of Role Context sections and a Data Pool section.
Header
This consists of the following fields (the trailing number in parentheses is the field's byte offset):
VERSION_NUMBER Theme Data Format Version Number (2 bytes). (0)
CHECKSUM Checksum of the Theme Data (4 bytes). (2)
THEME_ID Unique identification of the theme (4 bytes). (6)
GOLDEN_PHRASE Phrase specification of the Golden Phrase (2 bytes). (10)
NUMBER_OF_ROLES The number of roles supported in this theme (2 bytes). (12)
NUMBER_OF_THEME_ATTRIBUTES The number of theme attributes (2 or more) (1 byte). (14)
NUMBER_OF_ROLE_ATTRIBUTES The number of attributes per role (1 or more) (1 byte). (15)
NUMBER_OF_CONTEXT_ENTRIES The maximum number of context entries for each role (2 bytes). (16)
AUDIO_OFFSET Offset to start of audio section; zero if no audio (2 bytes). (18)
OFFSET_TO_DATA_POOL Offset in this file to the start of the data pool (2 bytes). (20)
ROLE_1_CHARACTER Character definition for Role_1 (4 bytes).
OFFSET_TO_ROLE_1_CONTEXT Offset to the start of the Role_1 context entries (2 bytes).
ROLE_2_CHARACTER Character definition for Role_2 (4 bytes).
OFFSET_TO_ROLE_2_CONTEXT Offset to the start of the Role_2 context entries (2 bytes).
...
ROLE_N_CHARACTER Character definition for Role_N (4 bytes).
OFFSET_TO_ROLE_N_CONTEXT Offset to the start of the Role_N context entries (2 bytes).
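A minimal sketch of parsing this header follows, assuming little-endian byte order, unsigned fields, and that the per-role fields begin immediately at offset 22 (the patent specifies none of these):

```python
import struct

def parse_theme_header(buf):
    """Parse the fixed Theme Data File header plus the per-role fields."""
    (version, checksum, theme_id, golden_phrase, n_roles,
     n_theme_attrs, n_role_attrs, n_context_entries,
     audio_offset, data_pool_offset) = struct.unpack_from("<HIIHHBBHHH", buf, 0)
    roles = []
    pos = 22  # assumed: ROLE_1_CHARACTER directly follows the fixed header
    for _ in range(n_roles):
        character, context_offset = struct.unpack_from("<IH", buf, pos)
        roles.append((character, context_offset))
        pos += 6
    return dict(version=version, checksum=checksum, theme_id=theme_id,
                golden_phrase=golden_phrase, roles=roles,
                theme_attributes=n_theme_attrs, role_attributes=n_role_attrs,
                context_entries_per_role=n_context_entries,
                audio_offset=audio_offset, data_pool_offset=data_pool_offset)
```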
Role Context Section
There is one of these for each role defined in the header. Each Role Context Section consists of NUMBER_OF_CONTEXT_ENTRIES context entries.
Context Entries consist of the following fields:
STATEMENT_MODE Timing specification of statement (1 byte).
CONDITION_BLOCK_INDEX Index into the data pool of a condition block (2 bytes).
STATEMENT_CHOICE_BLOCK_INDEX Index into the data pool of a statement choice block (2 bytes).
BRANCH_CHOICE_BLOCK_INDEX Index into the data pool of a branch choice block (2 bytes).
SET_ATTRIBUTE_INDEX Index into the data pool of a set attribute block (2 bytes).
CONDITION_METHOD Index of the condition handler (4 bits).
SAY_METHOD Index of the say handler (4 bits).
TRANSITION_METHOD Index of the role transition handler (4 bits).
BRANCH_HANDLER Index of the context branch handler (4 bits).
SET_METHOD Index of the set attribute function (4 bits).
UNUSED_METHOD Currently unused (4 bits).
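Summing the listed fields gives a 12-byte entry (one byte plus four 2-byte indices, then six 4-bit method indices in three bytes). A sketch of unpacking one entry, assuming little-endian order and low-nibble-first packing of the method indices (the nibble order is not stated):

```python
import struct

def parse_context_entry(buf, pos):
    """Unpack one 12-byte context entry starting at byte offset `pos`."""
    mode = buf[pos]                                        # STATEMENT_MODE
    cond_ix, stmt_ix, branch_ix, set_ix = struct.unpack_from("<HHHH", buf, pos + 1)
    nibbles = []
    for byte in buf[pos + 9:pos + 12]:                     # six 4-bit indices
        nibbles += [byte & 0x0F, byte >> 4]                # assumed low nibble first
    condition, say, transition, branch, set_m, unused = nibbles
    return dict(statement_mode=mode,
                condition_block=cond_ix, statement_choice_block=stmt_ix,
                branch_choice_block=branch_ix, set_attribute_block=set_ix,
                handlers=(condition, say, transition, branch, set_m))
```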
Data Pool
The data pool consists of a number of DATA_BLOCKS.
DATA BLOCK
This is a set of n+1 2-byte values:
VALUE_1 contains n, the number of remaining values.
VALUE_2 ... VALUE_N+1
The interpretation of values 2 to n+1 depends upon the context.
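For illustration, a DATA_BLOCK might be read as follows, assuming the block indices used elsewhere count 2-byte values from the start of the data pool, stored little-endian (both assumptions):

```python
import struct

def read_data_block(pool, index):
    """Return VALUE_2 .. VALUE_n+1 of the DATA_BLOCK at `index`."""
    n, = struct.unpack_from("<H", pool, index * 2)          # VALUE_1 holds n
    return list(struct.unpack_from("<%dH" % n, pool, index * 2 + 2))
```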
CONDITION BLOCK
This is a DATA_BLOCK where:
VALUE_2 is an attribute specification.
VALUE_3 to VALUE_N+1 are phrase specifications.
The number of conditions is n-1.
STATEMENT CHOICE BLOCK
This is a DATA_BLOCK where:
VALUE_2 is a weight w1.
VALUE_3 is an index to a statement block s1.
VALUE_4 w2.
VALUE_5 s2. Etc.
The number of choices is n/2.
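The weights suggest selection with probability proportional to weight (the scenario scripts in the appendices below use a weight of 1 for every option, giving uniform choices). The following sketch shows one plausible reading, taking the decoded (w, s) pairs of a statement choice block:

```python
import random

def weighted_choice(pairs):
    """Pick a statement block index from [(w1, s1), (w2, s2), ...] with
    probability proportional to its weight."""
    total = sum(w for w, _ in pairs)
    pick = random.uniform(0, total)
    for weight, statement_index in pairs:
        pick -= weight
        if pick <= 0:
            return statement_index
    return pairs[-1][1]   # guard against floating-point rounding

# Example: s=10 twice as likely as s=11.
print(weighted_choice([(2, 10), (1, 11)]))
```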
STATEMENT BLOCK
This is a DATA_BLOCK where:
VALUE_2 to VALUE_N+1 are phrase specifications.
The number of phrases is n.
BRANCH CHOICE BLOCK
This is a DATA_BLOCK where:
VALUE_2 is a weight w1.
VALUE_3 is an index to a role context c1.
VALUE_4 w2.
VALUE_5 c2. Etc.
The number of choices is n/2.
SET ATTRIBUTE BLOCK
This is a DATA_BLOCK where:
VALUE_2 is an attribute specification a1.
VALUE_3 is an index to a set choice block b1.
VALUE_4 is a set mode specification s1.
VALUE_5 a2.
VALUE_6 b2.
VALUE_7 s2. Etc.
The number of attributes to set is n/3.
SET CHOICE BLOCK
This is a DATA_BLOCK where:
VALUE_2 is a weight w1.
VALUE_3 is an index to a phrase specification p1.
VALUE_4 w2.
VALUE_5 p2.
PHRASE SPECIFICATION
This is a 16-bit value that has the following meanings:
0x0inn - A reference to the value stored in Role(i).Attribute(nn).
0x1nnn - A reference to the value stored in Theme.Attribute(nnn).
0x2nnn - A reference to the value stored in current_speaker.Attribute(nnn).
0x3nnn - A reference to the value stored in previous_speaker.Attribute(nnn).
0x4nnn - A reference to phrase audio data (nnn), stored in next_speaker.Attribute(nnn).
0x5nnn - A reference to phrase audio data (nnn), where:
0x5000 is a null reference
0x5001 is a reference to the "Sole Doll" event audio data
0x5002 is a reference to the "Scenario mismatch" event audio data
0x5003 is a reference to the "No role available" event audio data
0x5004 is a reference to the "Exception" event audio data
0x5005 is a reference to the "Battery Low" event audio data
0x5006+ are references to the scenario phrases.
0x6nnn - A reference to name audio data (nnn).
0x7nnn - Numeric value nnn.
0x8nnn - Dolls Data value, where:
nnn = 0 means the active dolls count
nnn = 1 means the current active doll number
nnn = 2 means the previous active doll number
nnn = 3 means the next active doll number
0x9nnn - A reference to the values stored in the role attribute nnn of each present doll.
0x10nnn - A reference to the values stored in role attribute nnn of all defined roles.
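A sketch of dispatching on the top nibble of a phrase specification follows, assuming the second nibble of the 0x0inn form is the role index i and treating the final, ambiguously printed pattern as unrecognised:

```python
def describe_phrase_spec(value):
    """Map a 16-bit phrase specification to a human-readable description."""
    top = value >> 12
    low12 = value & 0xFFF
    if top == 0x0:
        return "Role(%d).Attribute(%d)" % ((value >> 8) & 0xF, value & 0xFF)
    if top == 0x1:
        return "Theme.Attribute(%d)" % low12
    if top == 0x2:
        return "current_speaker.Attribute(%d)" % low12
    if top == 0x3:
        return "previous_speaker.Attribute(%d)" % low12
    if top == 0x4:
        return "next_speaker.Attribute(%d)" % low12
    if top == 0x5:
        return "null reference" if low12 == 0 else "phrase audio data %d" % low12
    if top == 0x6:
        return "name audio data %d" % low12
    if top == 0x7:
        return "numeric value %d" % low12
    if top == 0x8:
        return "dolls data value %d" % low12
    if top == 0x9:
        return "role attribute %d of each present doll" % low12
    return "unrecognised specification 0x%04X" % value

print(describe_phrase_spec(0x5006))   # first scenario phrase
```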
Appendix B: Example of a script with modifiers and custom spoken text that produces a board game scenario
When imported this script produces a snakes and ladders playing scenario for up to 6 roles. The context keywords and the spoken text are displayed in bold and comments are displayed in italics.
Theme: Games
Topic: Board
Scenario: Snakes and Ladders v1
Description: Up to 6 dolls play a jolly game of snakes and ladders
// define the roles
Role: Mary,0,0,0,0,0,0,0,0
Role: Alison,0,0,0,0,0,0,0,0
Role: Liz,0,0,0,0,0,0,0,0
Role: Evie,0,0,0,0,0,0,0,0
Role: Sienna,0,0,0,0,0,0,0,0
Role: Danielle,0,0,0,0,0,0,0,0
// define some phrases that must be saved in order
Phrase: 0
Phrase: 1
Phrase: 2
Phrase: 3
Phrase: 4
Phrase: 5
Phrase: 7
Phrase: 8
Phrase: 9
Phrase: 10
Phrase: 11
Phrase: 97
Phrase: 98
Phrase: 99
Phrase: 100
GoldenPhrase: Hello sailor
// start - pick any doll at random - go to setup
Me: {GoTo<Random,[(1,setup)]>,Any}
//* setup scene
Scene: setup
// setup - define the snakes and ladders
Me: {Set<Random,theme.ladder1bottom=[(1,4)], theme.ladder1top=[(1,39)], theme.ladder2bottom=[(1,26)], theme.ladder2top=[(1,75)], theme.ladder3bottom=[(1,33)], theme.ladder3top=[(1,52)], theme.ladder4bottom=[(1,59)], theme.ladder4top=[(1,63)], theme.ladder5bottom=[(1,73)], theme.ladder5top=[(1,93)], theme.snake1bottom=[(1,8)], theme.snake1top=[(1,36)], theme.snake2bottom=[(1,12)], theme.snake2top=[(1,30)], theme.snake3bottom=[(1,50)], theme.snake3top=[(1,70)], theme.snake4bottom=[(1,57)], theme.snake4top=[(1,86)], theme.snake5bottom=[(1,42)], theme.snake5top=[(1,99)], theme.nullphrase=[(1,null)]>}
// setup:1 - choose a unique counter colour for each present role
Me: {Set<Unique,each.counter=[(1,'red'), (1,'green'), (1,'yellow'), (1,'blue'), (1,'white'), (1,'black')]>}
// setup:2 - set each present role's position = 0, each state = 'turn' and dollcount = 0
Any: {Set<Random,each.position=[(1,0)], each.state=[(1,'turn')], theme.dollcount=[(1,0)]>}
// setup:3 loop - increment dollcount - goto main if dollcount > number of present dolls, otherwise goto setup:4
Me: {Set<Random,theme.dollcount+[(1,1)]>, If<theme.dollcount>[dolls.Count]>, GoTo<Condition,[(1,setup:4),(1,main)]>,Me}
// setup:4 - <= number of present dolls so say my counter colour - loop back to setup:3 for next doll
Me: {GoTo<Random,[(1,setup:3)]>,Next}
I'll have the [me.counter] counter
Mine's the [me.counter] counter
I want the [me.counter] counter
{Alison} It's the [me.counter] counter for me
I want the [me.counter] counter
{Liz} like I want the&like [me.counter] counter
// notice the use of custom_spoken_text to customise how Alison and Liz speak here
// end of setup
//* main scene
Scene: main
// main - call appropriate subroutine based on the value of the state attribute; each subroutine returns to main
Me: {If<me.state=['turn','snake','ladder','end_turn','win']>, Gosub<Condition,[(1,main),(1,init),(1,turn),(1,snakes),(1,ladders),(1,end_turn),(1,win)]>, Me}
//* win scene
Scene: win
// subroutine win - announce winner and end
Me: {GoTo<Random,[(1,end)]>,Me}
I win
yippee&I win
Great&I'm the winner
//******** end_turn scene
Scene: end_turn
// subroutine end_turn - check if 6 was thrown
Me: {Set<Random,me.state=[(1,'turn')]>,If<theme.dice=[6]>,GoTo<Condition,[(1,end_turn:2),(1,end_turn:1)]>,Me}
// end_turn:1 - 6 thrown so get another turn - return to main for current doll
Me: {GoTo<Random,[(1,Return)]>,Me}
I get another go
yippee&I get to go again
// end_turn:2 - say whose turn is next - return to main for next doll
Me: {GoTo<Random,[(1,Return)]>,Next}
Your turn [next.Name]
//* init scene
Scene: init
// subroutine init - called if a new doll joins, so state is unset - select a remaining colour - announce colour
Me: {Set<Unique,me.counter=[(1,'red'), (1,'green'), (1,'yellow'), (1,'blue'), (1,'white'), (1,'black')]>}
I'll have the [me.counter] counter
Mine's the [me.counter] counter
I want the [me.counter] counter
// init:1 - set new role's position = 0, state = 'turn' and return to main
Me: {Set<Random,me.position=[(1,0)], me.state=[(1,'turn')]>,GoTo<Random,[(1,Return)]>,Me}
//* turn scene
Scene: turn
// subroutine turn - roll the dice and say what you've rolled
Me: {Set<Random,theme.dice=[(1,1), (1,2), (1,3), (1,4), (1,5), (1,6)], theme.dicephrase=[(1,'0')], theme.dicephrase+[(1,theme.dice)]>}
It's my turn&rattle rattle&I've just rolled a [theme.dicephrase]
// turn:1 - say how many square(s) to move the correct colour counter
Me: {Set<Random,theme.dicecounter=[(1,theme.dice)]>, If<theme.dice>[1]>,Say<Condition,0>}
Now I must move the [me.counter] counter [theme.dicephrase] square
Now I must move the [me.counter] counter [theme.dicephrase] squares
// turn:2 - count down dicecounter, increment position and say each new position - check and return if winner with state set to 'win'
Me: {Set<Random,theme.dicecounter-[(1,1)], me.position+[(1,1)], me.posphrase=[(1,'0')], me.posphrase+[(1,me.position)], me.state=[(1,'win')]>, If<me.position=[100]>,GoTo<Condition,[(1,turn:3),(1,Return)]>,Me}
[me.posphrase]
// turn:3 - goto turn:2 if still counting - return if finished counting with state set to 'snake'
Me: {Set<Random,me.state=[(1,'snake')]>,If<theme.dicecounter=[0]>, GoTo<Condition,[(1,turn:2),(1,Return)]>,Me}
//******** snakes scene
Scene: snakes
// subroutine snakes - check if at mouth of a snake - return if not with state set to 'ladder'
Me: {Set<Random,me.state=[(1,'ladder')]>, If<me.position=[theme.snake1top, theme.snake2top, theme.snake3top, theme.snake4top, theme.snake5top]>, GoTo<Condition,[(1,Return),(1,snakes:1)]>,Me}
// snakes:1 - set new position to tail of appropriate snake
Me: {Set<Condition,me.position=[(1,me.position), (1,theme.snake1bottom), (1,theme.snake2bottom), (1,theme.snake3bottom), (1,theme.snake4bottom), (1,theme.snake5bottom)]>}
// snakes:2 - say new position and return
Me: {Set<Random,me.posphrase=[(1,'0')], me.posphrase+[(1,me.position)]>, GoTo<Random,[(1,Return)]>,Me}
Oh dear I have to go down the snake to square [me.posphrase]
//* ladders scene
Scene: ladders
// subroutine ladders - check if at bottom of a ladder - return if not with state set to 'end_turn'
Me: {Set<Random,me.state=[(1,'end_turn')]>, If<me.position=[theme.ladder1bottom, theme.ladder2bottom, theme.ladder3bottom, theme.ladder4bottom, theme.ladder5bottom]>, GoTo<Condition,[(1,Return),(1,ladders:1)]>,Me}
// ladders:1 - set new position to top of appropriate ladder
Me: {Set<Condition,me.position=[(1,me.position), (1,theme.ladder1top), (1,theme.ladder2top), (1,theme.ladder3top), (1,theme.ladder4top), (1,theme.ladder5top)]>}
// ladders:2 - say new position and return
Me: {Set<Random,me.posphrase=[(1,'0')], me.posphrase+[(1,me.position)]>,GoTo<Random,[(1,Return)]>,Me}
Hurrah I can go up the ladder to square [me.posphrase]
Appendix C: Example of development of a board game scenario with the authoring tool
The authoring tool described above is suitable for the production of many varied types of interactions between roles, from simple linear conversations to complex situations involving many interdependencies like, for example, when playing games. The following is an example, produced using the tool, of up to 6 roles playing a game of "Snakes and Ladders". It is essentially in the ".tmx" format used by the development tool for saving scenarios. What follows is an annotated excerpt from the complete "Snakes and Ladders.tmx" file for just one role.
The contents for the other roles are essentially similar but could be modified to produce more variation and characterisation of the roles. Each annotated section below corresponds to one context entry as described above.
{Mary,start,Set<Random,theme.ladder1bottom=[(1,4)], theme.ladder1top=[(1,39)], theme.ladder2bottom=[(1,26)], theme.ladder2top=[(1,75)], theme.ladder3bottom=[(1,33)], theme.ladder3top=[(1,52)], theme.ladder4bottom=[(1,59)], theme.ladder4top=[(1,63)], theme.ladder5bottom=[(1,73)], theme.ladder5top=[(1,93)], theme.snake1bottom=[(1,8)], theme.snake1top=[(1,36)], theme.snake2bottom=[(1,12)], theme.snake2top=[(1,30)], theme.snake3bottom=[(1,50)], theme.snake3top=[(1,70)], theme.snake4bottom=[(1,57)], theme.snake4top=[(1,86)], theme.snake5bottom=[(1,42)], theme.snake5top=[(1,99)], theme.nullphrase=[(1,null)]>,null,null,GoTo<Random,[(1,start:1)]>,Any}
Mary's "start" context entry. This defines the end points of the snakes and the ladders, then goes to "start:1".
{Mary,start:1,Set<Unique,all.counter=[(1,'red'), (1,'green'), (1,'yellow'), (1,'blue'), (1,'white'), (1,'black')]>,null,null,GoTo<Random,[(1,start:2)]>,Me}
Mary's "start:1" context entry. This chooses a colour for all the roles' counters randomly from a choice of red, green, yellow, blue, white or black, making sure each role gets a different colour.
{Mary,start:2,Set<Random,all.position=[(1,0)], theme.Transition=[(1,dolls.Me)]>,null,Say<Random,0,[(1,'i'll have the',me.counter,'counter')]>,GoTo<Random,[(1,start:3)]>,Next}
Mary's "start:2" context entry. This initialises all the roles' positions to zero, stores the currently active doll in theme.Transition and says "I'll have the "my colour" counter". It then selects the next doll and goes to context entry "start:3".
{Mary,start:3,null,If<theme.Transition=[dolls.Me]>,Say<Condition,0,[(1,'i'll have the',me.counter,'counter'),(1,theme.nullphrase)]>,GoTo<Condition,[(1,start:3),(1,start:4)]>,Next}
Mary's "start:3" context entry. This essentially loops through all the present dolls until it gets back to the first one, so that each doll can say "I'll have the "my colour" counter". It then goes to context entry "start:4" when finished looping.
{Mary,start:4,null,null,null,Gosub<Random,[(1,start:5),(1,turn)]>,Me}
Mary's "start:4" context entry. This calls the subroutine "turn", described below, to roll the dice and move the counter. It returns to "start:5" when finished.
{Mary,start:5,null,If<me.position>[99]>,null,GoTo<Condition,[(1,start:7),(1,start:6)]>,Me}
Mary's "start:5" context entry. This checks if Mary's position is greater than 99. If it is then go to "start:6", if not then go to "start:7".
{Mary,start:6,null,null,Say<Random,0,[(1,'i win'),(1,'yipee','i win'),(1,'great','i'm the winner')]>,GoTo<Random,[(1,end)]>,Me}
Mary's "start:6" context entry. As Mary's position is greater than 99 she has won the game, so she says either "I win" or "Yipee I win" or "Great I'm the winner" then goes to "end", which stops the game.
{Mary,start:7,null,null,null,Gosub<Random,[(1,start:8),(1,snakes)]>,Me}
Mary's "start:7" context entry. This calls the subroutine "snakes", described below, to check if Mary is at the top of a snake and to take the necessary action. It returns to "start:8" when done.
{Mary,start:8,null,null,null,Gosub<Random,[(1,start:9),(1,ladders)]>,Me}
Mary's "start:8" context entry. This calls the subroutine "ladders", described below, to check if Mary is at the bottom of a ladder and to take the necessary action. It returns to "start:9" when done.
{Mary,start:9,null,If<theme.dice=[6]>,null,GoTo<Condition,[(1,start:11),(1,start:10)]>,Me}
Mary's "start:9" context entry. This checks if Mary's last throw of the dice was a 6. If so go to "start:10", if not go to "start:11".
{Mary,start:10,null,null,Say<Random,0,[(1,'i get another go'),(1,'yipee','i get to go again')]>,GoTo<Random,[(1,start:4)]>,Me}
Mary's "start:10" context entry. As Mary's last throw was a 6 she gets another throw, so she says either "I get another go" or "yippee, I get to go again". She keeps control and then goes back to "start:4" above, which calls subroutine "turn".
{Mary,start:11,null,null,Say<Random,0,[(1,'your turn',next.Name)]>,GoTo<Random,[(1,start:4)]>,Next}
Mary's "start:11" context entry. As Mary didn't throw a 6 she selects the next doll and goes back to "start:4".
{Mary,turn,Set<Random,theme.dice=[(1,1), (1,2), (1,3), (1,4), (1,5), (1,6)], theme.dicephrase=[(1,'0')], theme.dicephrase+[(1,theme.dice)]>,null,Say<Random,0,[(1,'it's my turn','rattle rattle','i've just rolled a',theme.dicephrase)]>,GoTo<Random,[(1,turn:1)]>,Me}
Mary's subroutine "turn" context entry. This randomly chooses a value of 1 to 6 for Mary's dice, sets the dicephrase to '0' and adds Mary's dice value to the dicephrase, which produces a reference to the audio representation of Mary's dice value. Then she says "it's my turn" "rattle rattle" "I've just rolled a "dice value"", and goes to "turn:1".
{Mary,turn:1,Set<Random,theme.dicecounter=[(1,theme.dice)]>,If<theme.dice>[1]>,Say<Condition,0,[(1,'now i must move the',me.counter,'counter',theme.dicephrase,'square'),(1,'now i must move the',me.counter,'counter',theme.dicephrase,'squares')]>,GoTo<Random,[(1,turn:2)]>,Me}
Mary's subroutine "turn:1" context entry. This basically checks if Mary's throw was greater than 1. If it was she says "now I must move the "my colour" counter "dice value" squares", that is plural squares. If not she says "now I must move the "my colour" counter "dice value" square", that is singular square.
{Mary,turn:2,Set<Random,theme.dicecounter-[(1,1)], me.position+[(1,1)], me.posphrase=[(1,'0')], me.posphrase+[(1,me.position)]>,If<me.position=[100]>,Say<Random,0,[(1,me.posphrase)]>,GoTo<Condition,[(1,turn:3),(1,Return)]>,Me}
Mary's subroutine "turn:2" context entry. This subtracts 1 from dicecounter, adds 1 to Mary's position, makes sure that posphrase references the audio representation of Mary's position, and checks if Mary's position = 100. She says her position and then, if her position is 100, she returns from the subroutine; if not she continues to "turn:3".
{Mary,turn:3,null,If<theme.dicecounter=[0]>,null,GoTo<Condition,[(1,turn:2),(1,Return)]>,Me}
Mary's subroutine "turn:3" context entry. This checks if dicecounter = 0. If it does then Mary has finished this turn so she returns from the subroutine; if not she goes back to "turn:2". The net result of "turn:2" and "turn:3" is to count up her position by the exact number she has just thrown on the dice.
{Mary,snakes,null,If<me.position=[theme.snake1top, theme.snake2top, theme.snake3top, theme.snake4top, theme.snake5top]>,null,GoTo<Condition,[(1,Return),(1,snakes:1)]>,Me}
Mary's subroutine "snakes" context entry. This checks if Mary's position is at the top of any of the snakes. If it is she goes to "snakes:1", if not she returns from the subroutine.
{Mary,snakes:1,Set<Condition,me.position=[(1,me.position), (1,theme.snake1bottom), (1,theme.snake2bottom), (1,theme.snake3bottom), (1,theme.snake4bottom), (1,theme.snake5bottom)]>,null,null,GoTo<Random,[(1,snakes:2)]>,Me}
Mary's subroutine "snakes:1" context entry. This sets Mary's position to the bottom of the snake that she is on and then goes to "snakes:2".
{Mary,snakes:2,Set<Random,me.posphrase=[(1,'0')], me.posphrase+[(1,me.position)]>,null,Say<Random,0,[(1,'oh dear i have to go down the snake to square',me.posphrase)]>,GoTo<Random,[(1,Return)]>,Me}
Mary's subroutine "snakes:2" context entry. This sets Mary's posphrase reference to be the audio representation of her new position, then she says "Oh dear I have to go down the snake to square "position"". Then she returns from the subroutine.
{Mary,ladders,null,If<me.position=[theme.ladder1bottom, theme.ladder2bottom, theme.ladder3bottom, theme.ladder4bottom, theme.ladder5bottom]>,null,GoTo<Condition,[(1,Return),(1,ladders:1)]>,Me}
Mary's subroutine "ladders" context entry. This checks if Mary's position is at the bottom of any of the ladders. If it is she goes to "ladders:1", if not she returns from the subroutine.
{Mary,ladders:1,Set<Condition,me.position=[(1,me.position), (1,theme.ladder1top), (1,theme.ladder2top), (1,theme.ladder3top), (1,theme.ladder4top), (1,theme.ladder5top)]>,null,null,GoTo<Random,[(1,ladders:2)]>,Me}
Mary's subroutine "ladders:1" context entry. This sets Mary's position to the top of the ladder that she is on and then goes to "ladders:2".
{Mary,ladders:2,Set<Random,me.posphrase=[(1,'0')], me.posphrase+[(1,me.position)]>,null,Say<Random,0,[(1,'hurrah i can go up the ladder to square',me.posphrase)]>,GoTo<Random,[(1,Return)]>,Me}
Mary's subroutine "ladders:2" context entry. This sets Mary's posphrase reference to be the audio representation of her new position, then she says "hurrah I can go up the ladder to square "position"". Then she returns from the subroutine.
{Mary,end,null,null,null,null,null}
Claims (99)
Claims
User-doll interactions
- 1. A toy comprising: a processor; a memory for storing at least one group of data, each said at least one group comprising a plurality of expressive responses, and each said group representing a respective theme; an output for said expressive responses; the toy being adapted to exchange such expressive responses with another such toy; means for receiving an instructive response from a user; and means for altering the exchange of expressive responses between the toys in dependence upon the received user instructive response.
- 2. A toy according to Claim 1 wherein the toy is adapted to receive an instructive response during the exchange of expressive responses, and the means for altering the exchange of expressive responses alters the subsequent exchange of expressive responses.
- 3. A toy according to Claim 1 or 2 wherein at least one expressive response includes a query.
- 4. A toy according to Claim 3 wherein the query is addressed to the user.
- 5. A toy according to Claim 3 or 4 wherein the toy is adapted to await an instructive response from a user following output of a query.
- 6. A toy according to Claim 5 wherein the toy is adapted to await the instructive response for a predetermined period.
- 7. A toy according to Claim 6 wherein the toy is adapted to continue to exchange expressive responses with the other such toy in the absence of an instructive response within the predetermined period.
- 8. A toy according to any of Claims 1 to 7 wherein the toy is adapted to exchange expressive responses representing a respective theme in dependence on the instructive response from the user.
- 9. A toy according to any of Claims 1 to 8 further comprising means for receiving an instructive response from a user.
- 10. A toy according to Claim 9 wherein the means is at least one of: a button; a remote control; a touch screen; and a linked computing device.
- 11. A method of communication between first and second toys comprising: storing at least one group of data on each toy, each said at least one group comprising a plurality of expressive responses, and each said group representing a respective theme; exchanging expressive responses between first and second toys; receiving an instructive response from a user; and altering the exchange of expressive responses between the toys in dependence upon the received user instructive response.
Name Files
- 12.A toy adapted to interact with another such toy, the toy comprising: a processor; a memory for storing audio data; an output for outputting said audio data; means for receiving an identifier from the other such toy; and means for downloading audio data relating to said identifier for subsequent output by the toy.
- 13.A toy according to Claim 12, wherein the audio data relating to said identifier includes audio data specific to that other toy.
- 14.A toy according to Claim 13, wherein the specific audio data includes any one or more of the following types of personal data relating to that other toy: its name; place of birth; home town; a hobby or interest; and favourite colour or food.
- 15. A toy according to any of Claims 12 to 14, wherein the identifier identifies any one or more of the following variables relating to that other toy: the specific toy; a name; place of birth; home town; a hobby or interest; and favourite colour or food.
- 16.A toy according to any of Claims 12 to 15, wherein audio data is in a specific form dependent on an audio output setting of the toy.
- 17. A toy according to Claim 16, wherein the audio output setting is user-selectable.
- 18.A toy according to any of Claims 15 to 17, wherein the audio output setting is a user selectable voice.
- 19. A toy according to Claim 18, wherein all audio data output by the toy is in the selectable voice.
- 20.A toy according to Claim 18 or 19, wherein all audio data stored by the toy is in the selectable voice.
- 21.A toy according to any of Claims 18 to 20, wherein the audio data relating to said identifier is in the selectable voice.
- 22.A toy according to any of Claims 12 to 21, wherein the toy is adapted to be connectable to a server thereby to download said audio data relating to said identifier from the server.
- 23.A toy according to Claim 22, wherein said identifier is stored in the toy memory.
- 24.A toy according to Claim 22 or 23, wherein said audio data relating to said identifier is associated with the identifier on the server.
- 25. A toy according to any of Claims 12 to 24, wherein said audio data relating to said identifier is associated with and/or linked to the identifier in the toy memory.
- 26.A toy according to any of Claims 12 to 25, wherein the toy exchanges identifiers with the other such toy when it comes into contact with the other toy.
- 27.A toy according to any of Claims 12 to 26, further comprising means for transmitting an identifier relating to the toy to another such toy.
- 28.A toy according to any of Claims 12 to 27, wherein the toy further comprises means for modifying the audio data output to the other toy in dependence on whether or not the toy has stored thereon the audio data relating to said identifier for the other toy.
- 29.A toy according to any of Claims 12 to 28, wherein the audio data is in the form of expressive responses adapted to be exchanged between the toys.
- 30.A toy according to Claim 29, wherein the sequence of the expressive responses is adapted in dependence on whether or not the toy has stored thereon the audio data relating to said identifier for the other toy.
- 31.A toy according to Claim 29 or 30, wherein an expressive response is selected in dependence on whether or not the toy has stored thereon the audio data relating to said identifier for the other toy.
- 32.A toy according to any of Claims 12 to 31, further comprising means for initializing the toy with particular personal data associated with the toy.
- 33.A toy according to any of Claims 12 to 32, further comprising means for determining whether audio data relating to the received identifier is already stored in the toy memory.
- 34.A toy according to any of Claims 12 to 33, further comprising means for determining whether the received identifier is already stored in the toy memory.
- 35. A toy according to Claim 34, wherein if the received identifier is not stored, then the identifier is added to the memory for subsequent request of audio data relating to the identifier.
- 36.A toy according to Claim 35, wherein upon receipt of audio data corresponding to an identifier the identifier is deleted.
- 37. A toy according to any of Claims 12 to 36, wherein the means for storing audio data is adapted to store a predetermined maximum quantity of audio data files relating to a predetermined maximum number of identifiers.
- 38.A toy according to any of Claims 12 to 37, wherein the processor is adapted to store audio data corresponding to an identifier of the toy itself.
- 39.A toy according to Claim 38, wherein the processor is adapted to prevent deletion of the audio data corresponding to an identifier of the toy itself.
- 40. A toy according to any of Claims 12 to 39, wherein the toy is a doll.
- 41. A system for providing audio data to interacting toys, the system comprising: a server for storing identifiers corresponding to each of the toys, and audio data relating to said identifiers; a plurality of toys adapted to interact with one another and exchange identifiers when coming into contact with one another; and wherein the toys are adapted to download from the server the audio data related to the identifiers for subsequent output by each of the toys.
- 42.A system according to Claim 41, wherein each toy is provided with an audio output setting, and wherein the audio data downloaded to a toy is related both to the audio output setting of that toy and the identifier of another toy.
- 43.A system according to Claim 41 or 42, further comprising means for storing the audio data relating to a said identifier in a plurality of audio data files, each one of said plurality of audio data files corresponding to a respective audio output setting.
- 44. A method of communication between first and second toys, the method comprising: exchanging identifiers between the toys; and downloading audio data relating to said identifiers for subsequent output by the toys.
Personality fitting - role selection
- 45. A toy adapted to interact with at least another such toy, the toy comprising: a memory for storing at least one group of data, each said at least one group of data comprising a plurality of expressive responses, and each said group representing a respective theme; an output for said expressive responses, the toy being adapted to exchange such expressive responses with other such toys; and means for selecting certain of the expressive responses in dependence on a personality parameter associated with the toy.
- 46.A toy according to Claim 45, wherein the plurality of expressive responses in each said group of data is grouped together to define predetermined character roles within the theme, and wherein the selecting means is adapted to select a particular role in dependence on the personality parameter, and preferably wherein the expressive responses are grouped together to define the predetermined roles via role identifiers.
- 47.A toy according to Claim 45 or 46 wherein the personality parameter is a compound parameter consisting of a plurality of personality trait parameters.
- 48.A toy according to Claim 47 wherein the compound parameter consists of between 1 and 15 personality trait parameters, preferably between 3 and 12 personality trait parameters, and more preferably 8 personality trait parameters.
- 49. A toy according to Claim 47 or 48, wherein each personality trait parameter is a variable defining the level of said personality trait parameter, and preferably wherein said level is selectable and/or adjustable by a user.
- 50. A toy according to any of Claims 45 to 49 wherein the personality parameter is user-defined.
- 51. A toy according to any of Claims 46 to 50 wherein each role is provided with an associated personality parameter, and wherein the selecting means is adapted to compare each of the role personality parameters in the theme with the toy's personality parameter and to select a role that matches the toy's personality parameter most closely.
Personality fitting - authoring tool
- 52. An authoring tool for creating themed data for toys, comprising means for receiving content relating to a particular theme; means for associating at least a part of the content with a personality parameter; means for processing said content to generate a set of instructions for operating said toy within said particular theme; and means for outputting said set of instructions.
- 53.An authoring tool according to Claim 52, wherein the content is in the form of a plurality of expressive responses grouped together to define predetermined roles within a theme, and wherein a personality parameter is assigned to each role.
- 54.An authoring tool according to Claim 52 or 53 wherein the personality parameter is a compound parameter consisting of a plurality of personality trait parameters.
- 55. A toy according to Claim 54 wherein the compound parameter consists of between 1 and 15 personality trait parameters, preferably between 3 and 12 personality trait parameters, and more preferably 8 personality trait parameters.
Script importation - authoring tool
- 56.An authoring tool for creating themed data for toys, comprising means for receiving content in the form of a scripted dialogue relating to a particular theme; means for processing said content to generate a set of instructions for operating said toy within said particular theme; and means for outputting said set of instructions.
- 57. An authoring tool according to Claim 56, further comprising means for providing a plurality of user selectable content elements, and means for receiving a user selection of at least one of said content elements thereby to create said scripted dialogue.
- 58.An authoring tool according to Claim 57, further comprising a graphical user interface, and wherein the content elements are provided in the form of user selectable graphical indicia.
- 59.An authoring tool according to Claim 58, wherein the graphical user interface comprises a storyboard on to which content elements (optionally) in the form of expressive responses can be dragged and dropped.
- 60.An authoring tool according to Claim 59, wherein the storyboard comprises a plurality of panels.
- 61.An authoring tool according to Claim 59 or 60, wherein at least one panel comprises means for setting the theme and/or at least one character.
- 62.An authoring tool according to Claim 60 or 61, wherein at least one panel provides a placeholder for a content item (preferably an expressive response) associated with a character, and wherein the authoring tool is adapted to receive a user-selected expressive response and replace the placeholder with the user-selected expressive response.
- 63.An authoring tool according to Claim 61 or 62, wherein the authoring tool is adapted to provide a plurality of potential expressive responses for user selection.
- 64.An authoring tool according to Claim 63, wherein the authoring tool is adapted to provide a plurality of potential expressive responses in dependence on at least one of: the panel; the theme setting; the character; and prior user-selected expressive responses.
- 65. An authoring tool according to Claim 63 or 64, wherein the plurality of potential expressive responses provided is filtered in dependence on at least one of: the panel; the setting; a character; a user-selected expressive response.
- 66. An authoring tool according to any of Claims 63 to 65, wherein the authoring tool is adapted to receive a user input and adapt at least one of the potential expressive responses to comprise the user input.
- 67.A method of creating themed data for toys, comprising receiving content in the form of a scripted dialogue relating to a particular theme; processing said content to generate a set of instructions for operating said toy within said particular theme; and outputting said set of instructions.
- 68.A method according to Claim 67, further comprising providing a plurality of user selectable content elements, and receiving a user selection of at least one of said content elements thereby to create said scripted dialogue.
- 69.A method according to Claim 68, wherein the user selectable content elements are selected via a graphical user interface.
- 70.A method according to Claims 67 to 69, wherein the content element is at least one of: an expressive response; a role; a number of participating roles; a theme; and a topic.
- 71.An apparatus for creating themed data for toys, comprising means for receiving content in the form of a scripted dialogue relating to a particular theme; means for processing said content to generate a set of instructions for operating said toy within said particular theme; and means for outputting said set of instructions.
- 72. An apparatus according to Claim 71, further comprising means for providing a plurality of user selectable content elements, and means for receiving a user selection of at least one of said content elements thereby to create said scripted dialogue.
- 73. An apparatus according to Claim 72, further comprising a graphical user interface, and wherein the content elements are provided in the form of user selectable graphical indicia.
Conditional flow - authoring tool
- 74.An authoring tool for creating themed data for toys, comprising means for receiving content relating to a particular theme; means for processing said content to generate a plurality of different conversations each based on a set of expressive responses relating to a theme, wherein the conversations vary in dependence on a conversation condition; means for generating a set of instructions for operating said toys within said particular theme; and means for outputting said set of instructions.
- 75. An authoring tool according to Claim 74, wherein the conversation condition is at least one or more of the following: the nature or type of toy; a character role within the theme; the nature and/or type of theme; an attribute of the toy; a prior conversation sequence; a dialogue parameter; and a personality parameter associated with the toy and/or a character role within the theme.
- 76.An authoring tool according to Claim 74 or 75, wherein the conversation condition relates to the number of toys or character roles participating in the conversation.
- 77. An authoring tool according to Claim 76, wherein the conversation is tested in a pre-determined sequence for each of a plurality of toys or character roles.
Audio synthesis - authoring tool
- 78.An authoring tool for creating themed data for toys, comprising means for receiving content relating to a particular theme; means for processing said content to generate a set of instructions for operating said toy within said particular theme; means for synthesising audio data relating to said content; and means for outputting said set of instructions.
- 79. An authoring tool according to Claim 78 wherein the synthesising means is adapted to synthesise audio output in a plurality of synthetic voices.
Communication interface
- 80.A communication interface for connecting a toy with a remote server, comprising means for detecting the toy; means for receiving an identification which identifies the toy; means for forwarding the identification on to the remote server; and means for transferring data between the remote server and the toy.
- 81. A communication interface according to Claim 80, wherein the communication interface is adapted to execute on a computing device.
- 82. A communication interface according to Claim 81, wherein the communication interface is adapted to execute as a background process on the computing device.
- 83.A communication interface according to Claim 81 or 82, wherein the communication interface is adapted to run in the notification area or system tray.
- 84. A communication interface according to any of Claims 80 to 83, wherein the communication interface is adapted to synchronise data stored on the toy with data associated with the toy and stored on the server.
- 85.A communication interface according to any of Claims 80 to 84, wherein the server includes a website.
- 86. A communication interface according to any of Claims 80 to 85, wherein the communication interface is adapted to receive from the server an indication of whether the toy is legitimate in response to forwarding the identification.
- 87. A communication interface according to any of Claims 80 to 86, wherein the communication interface is adapted to receive from the server an indication of whether the toy is registered at the server in response to forwarding the identification.
- 88. A communication interface according to Claim 87, wherein the communication interface is adapted to initiate registration if the toy is not registered at the server.
- 89.A communication interface according to any of Claims 80 to 88, wherein the identification includes a user identifier and a toy identifier.
- 90. A communication interface according to Claim 89, wherein the communication interface is adapted to receive from the server an indication of whether the toy is registered to the user at the server in response to forwarding the user identifier and toy identifier.
- 91.A communication interface according to any of Claims 80 to 90, wherein the communication interface is adapted to receive toy characteristics data from the toy and forward the toy characteristics data on to the remote server.
- 92.A communication interface according to Claim 91, wherein the toy characteristics data includes data relating to one, some, or all of the following: conversation participation count; count of instances of undertaking a particular activity; and count of instances of speaking a particular phrase.
- 93.A communication interface according to Claim 91 or 92, wherein the toy characteristics data includes identifiers of required data, preferably identifiers of required audio data, required variable audio data, required name audio data, and/or references to required themed data.
- 94. A communication interface according to Claim 93, wherein the communication interface is adapted to forward the identifiers of required data to the server, receive the required data, and dispatch the required data to the toy.
- 95. A communication interface according to any of Claims 80 to 94, wherein the communication interface is adapted to receive toy settings from the server, receive toy settings from the toy, determine whether there is a difference, and if there is a difference then dispatch the updated toy settings to the toy.
- 96. A communication interface according to Claim 95, wherein the communication interface is adapted to receive a toy settings update from a user input and forward the toy settings update on to the remote server.
- 97. A communication interface according to Claim 95 or 96, wherein the toy settings include data relating to one, some, or all of: a toy name; a toy variable; a toy personality; and a toy voice.
H-bridge circuit arrangement
- 98.An H-bridge circuit arrangement comprising: a pair of bipolar transistors and a pair of field-effect transistors, arranged such that each side of the H-bridge comprises a bipolar transistor and a field-effect transistor; and a pair of reverse-biased diodes, each of the reverse-biased diodes being connected between the base of a respective one of the bipolar transistors and signal ground; such that, in the event of a given bipolar transistor being subjected to polarity reversal, its base potential is substantially the same as its emitter potential such that it does not come into a state of conduction.
- 99. An H-bridge circuit arrangement as claimed in Claim 98 wherein, on each side of the H-bridge, the collector of the bipolar transistor is connected to the drain of the accompanying field-effect transistor.
Claims are truncated...
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1222755.9A GB2511479A (en) | 2012-12-17 | 2012-12-17 | Interacting toys |
US13/784,075 US20140170929A1 (en) | 2012-12-17 | 2013-03-04 | Interacting toys |
PCT/GB2013/053330 WO2014096812A2 (en) | 2012-12-17 | 2013-12-17 | Interacting toys |
EP13821904.3A EP2941308A2 (en) | 2012-12-17 | 2013-12-17 | Interacting toys |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1222755.9A GB2511479A (en) | 2012-12-17 | 2012-12-17 | Interacting toys |
Publications (2)
Publication Number | Publication Date |
---|---|
GB201222755D0 GB201222755D0 (en) | 2013-01-30 |
GB2511479A true GB2511479A (en) | 2014-09-10 |
Family
ID=47630883
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB1222755.9A Withdrawn GB2511479A (en) | 2012-12-17 | 2012-12-17 | Interacting toys |
Country Status (4)
Country | Link |
---|---|
US (1) | US20140170929A1 (en) |
EP (1) | EP2941308A2 (en) |
GB (1) | GB2511479A (en) |
WO (1) | WO2014096812A2 (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9174116B2 (en) * | 2012-09-28 | 2015-11-03 | Intel Corporation | System for developing, managing, acquiring and playing electronic board games |
US10335702B2 (en) * | 2015-07-20 | 2019-07-02 | Brixo Smart Toys Ltd. | Circuit building system |
DE102015011802A1 (en) * | 2015-09-17 | 2017-03-23 | Multiplex Modellsport Gmbh & Co. Kg | Identification procedure for remote-controlled devices |
US10272349B2 (en) * | 2016-09-07 | 2019-04-30 | Isaac Davenport | Dialog simulation |
CN107657471B (en) * | 2016-09-22 | 2021-04-30 | 腾讯科技(北京)有限公司 | Virtual resource display method, client and plug-in |
US10111035B2 (en) | 2016-10-03 | 2018-10-23 | Isaac Davenport | Real-time proximity tracking using received signal strength indication |
US10981073B2 (en) * | 2018-10-22 | 2021-04-20 | Disney Enterprises, Inc. | Localized and standalone semi-randomized character conversations |
CN112198816B (en) * | 2020-09-06 | 2022-02-18 | 南京乐服智慧科技有限公司 | Multi-equipment interaction system and interaction method based on script |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6663393B1 (en) * | 1999-07-10 | 2003-12-16 | Nabil N. Ghaly | Interactive play device and method |
US6719604B2 (en) * | 2000-01-04 | 2004-04-13 | Thinking Technology, Inc. | Interactive dress-up toy |
US6443796B1 (en) * | 2000-06-19 | 2002-09-03 | Judith Ann Shackelford | Smart blocks |
AU2001277640A1 (en) * | 2000-07-01 | 2002-01-14 | Alexander V. Smirnov | Interacting toys |
TW572767B (en) * | 2001-06-19 | 2004-01-21 | Winbond Electronics Corp | Interactive toy |
US6800013B2 (en) * | 2001-12-28 | 2004-10-05 | Shu-Ming Liu | Interactive toy system |
US7905759B1 (en) * | 2003-10-07 | 2011-03-15 | Ghaly Nabil N | Interactive play set |
US8157611B2 (en) * | 2005-10-21 | 2012-04-17 | Patent Category Corp. | Interactive toy system |
US8324492B2 (en) * | 2006-04-21 | 2012-12-04 | Vergence Entertainment Llc | Musically interacting devices |
US8287327B1 (en) * | 2006-08-02 | 2012-10-16 | Ghaly Nabil N | Interactive play set |
US8306509B2 (en) * | 2007-08-31 | 2012-11-06 | At&T Mobility Ii Llc | Enhanced messaging with language translation feature |
US8926395B2 (en) * | 2007-11-28 | 2015-01-06 | Patent Category Corp. | System, method, and apparatus for interactive play |
US8172637B2 (en) * | 2008-03-12 | 2012-05-08 | Health Hero Network, Inc. | Programmable interactive talking device |
CN101693143B (en) * | 2009-09-30 | 2012-08-22 | 汕头市粤成动游网络科技有限公司 | Method for combining online game with toy |
EP2613855A4 (en) * | 2010-09-09 | 2014-12-31 | Tweedletech Llc | A board game with dynamic characteristic tracking |
US8737677B2 (en) * | 2011-07-19 | 2014-05-27 | Toytalk, Inc. | Customized audio content relating to an object of interest |
2012
- 2012-12-17: GB GB1222755.9A patent/GB2511479A/en not_active Withdrawn
2013
- 2013-03-04: US US13/784,075 patent/US20140170929A1/en not_active Abandoned
- 2013-12-17: EP EP13821904.3A patent/EP2941308A2/en not_active Withdrawn
- 2013-12-17: WO PCT/GB2013/053330 patent/WO2014096812A2/en active Application Filing
Patent Citations (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1985002529A1 (en) * | 1983-12-12 | 1985-06-20 | Sri International | Data compression system and method for processing digital sample signals |
WO1986000745A1 (en) * | 1984-07-16 | 1986-01-30 | Mcwhirter Holdings Pty. Limited | Recorded information verification system |
US5394274A (en) * | 1988-01-22 | 1995-02-28 | Kahn; Leonard R. | Anti-copy system utilizing audible and inaudible protection signals |
US20040128514A1 (en) * | 1996-04-25 | 2004-07-01 | Rhoads Geoffrey B. | Method for increasing the functionality of a media player/recorder device or an application program |
US6304761B1 (en) * | 1996-05-31 | 2001-10-16 | Matsushita Electric Industrial Co., Ltd. | Mobile unit communication apparatus having digital and analog communication modes and a method of controlling the same |
US20010034180A1 (en) * | 1997-04-09 | 2001-10-25 | Fong Peter Sui Lun | Interactive talking dolls |
US20010032278A1 (en) * | 1997-10-07 | 2001-10-18 | Brown Stephen J. | Remote generation and distribution of command programs for programmable devices |
WO1999017854A1 (en) * | 1997-10-07 | 1999-04-15 | Health Hero Network, Inc. | Remotely programmable talking toy |
WO2000015316A2 (en) * | 1998-09-16 | 2000-03-23 | Comsense Technologies, Ltd. | Interactive toys |
WO2000031613A1 (en) * | 1998-11-26 | 2000-06-02 | Creator Ltd. | Script development systems and methods useful therefor |
WO2001012285A1 (en) * | 1999-08-19 | 2001-02-22 | Kidkids, Inc. | Networked toys |
EP2088705A1 (en) * | 2000-02-17 | 2009-08-12 | Microsoft Corporation | System and method for protecting data streams in hardware components |
US20020049606A1 (en) * | 2000-05-16 | 2002-04-25 | Lg Electronics Inc. | Interactive learning device using web-based system and method therefor |
EP1444695B1 (en) * | 2001-11-07 | 2009-09-09 | Koninklijke Philips Electronics N.V. | Apparatus for and method of preventing illicit copying of digital content |
US20030173922A1 (en) * | 2002-03-13 | 2003-09-18 | Pelonis Kosta L. | D.c. motor bridge coil driver |
EP1493154A1 (en) * | 2002-03-28 | 2005-01-05 | Koninklijke Philips Electronics N.V. | Time domain watermarking of multimedia signals |
GB2423943A (en) * | 2005-04-26 | 2006-09-13 | Steven Lipman | Communicating Toy |
WO2006114625A2 (en) * | 2005-04-26 | 2006-11-02 | Steven Lipman | Toys |
WO2009010760A2 (en) * | 2007-07-19 | 2009-01-22 | Steven Lipman | Interacting toys |
US20090117816A1 (en) * | 2007-11-07 | 2009-05-07 | Nakamura Michael L | Interactive toy |
WO2010007336A1 (en) * | 2008-07-18 | 2010-01-21 | Steven Lipman | Interacting toys |
US20110307677A1 (en) * | 2008-10-24 | 2011-12-15 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | Device for managing data buffers in a memory space divided into a plurality of memory elements |
US20100125695A1 (en) * | 2008-11-15 | 2010-05-20 | Nanostar Corporation | Non-volatile memory storage system |
US20110124916A1 (en) * | 2009-11-19 | 2011-05-26 | Kyushu University, National University Corporation | Thermal responsive molecule |
US7970927B1 (en) * | 2009-12-31 | 2011-06-28 | Qlogic, Corporation | Concurrent transmit processing |
US20120096225A1 (en) * | 2010-10-13 | 2012-04-19 | Microsoft Corporation | Dynamic cache configuration using separate read and write caches |
US20120144109A1 (en) * | 2010-12-07 | 2012-06-07 | International Business Machines Corporation | Dynamic adjustment of read/write ratio of a disk cache |
CN202261070U (en) * | 2011-10-09 | 2012-05-30 | 郑州朗睿科技有限公司 | H-bridge inverter circuit |
WO2014065036A1 (en) * | 2012-10-26 | 2014-05-01 | 富士フイルム株式会社 | Voice coil motor (vcm) drive device and portable terminal |
Also Published As
Publication number | Publication date |
---|---|
EP2941308A2 (en) | 2015-11-11 |
WO2014096812A2 (en) | 2014-06-26 |
US20140170929A1 (en) | 2014-06-19 |
WO2014096812A3 (en) | 2014-10-30 |
GB201222755D0 (en) | 2013-01-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
GB2511479A (en) | Interacting toys | |
JP5628029B2 (en) | Interactive toys | |
US8324492B2 (en) | Musically interacting devices | |
US20210308592A1 (en) | Interactive toy providing dynamic, navigable media content | |
US10643482B2 (en) | Fill-in-the-blank audio-story engine | |
JP5404781B2 (en) | Interactive toys | |
JP2011528246A5 (en) | ||
van Stegeren et al. | Fantastic strings and where to find them: The quest for high-quality video game text corpora | |
CN111105776A (en) | Audio playing device and playing method thereof | |
CN107908709A (en) | Parent-child language chat interaction method, device and system | |
CN108687779A (en) | A kind of the dancing development approach and system of domestic robot | |
Engström et al. | Using text-to-speech to prototype game dialog | |
Angulo et al. | Aibo jukeBox–A robot dance interactive experience | |
JP2018159779A (en) | Voice reproduction mode determination device, and voice reproduction mode determination program | |
US9180370B2 (en) | Methods and apparatus for acoustic model based soundtracks | |
Beauchemin | A Rare Breed | |
Bhutada | Universal Event and Motion Editor for Robots' Theatre | |
Liukkala | Creating Two Audio Features with Wwise: Red Stage Entertainment | |
Kurlander | Persona: An Architecture for Animated Agent Interfaces | |
WO2003007273A2 (en) | Seemingly teachable toys | |
Johnston | The Sound of Civilization: Music in Terry Nation's Survivors | |
Marynowsky | An exploration of the uncanny in autonomous artworks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
REG | Reference to a national code |
Ref country code: HK Ref legal event code: DE Ref document number: 1201775 Country of ref document: HK |
732E | Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977) |
Free format text: REGISTERED BETWEEN 20170525 AND 20170531 |
REG | Reference to a national code |
Ref country code: HK Ref legal event code: WD Ref document number: 1201775 Country of ref document: HK |
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |