WO2019040343A1 - Intelligent umbrellas and robotic systems - Google Patents

Intelligent umbrellas and robotic systems

Info

Publication number
WO2019040343A1
Authority
WO
WIPO (PCT)
Prior art keywords
noise
files
audio
audio files
computer
Prior art date
Application number
PCT/US2018/047010
Other languages
English (en)
Inventor
Armen Gharabegian
Original Assignee
Shadecraft, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 15/681,377 (US20180190257A1)
Application filed by Shadecraft, Inc. filed Critical Shadecraft, Inc.
Publication of WO2019040343A1

Classifications

    • A HUMAN NECESSITIES
    • A45 HAND OR TRAVELLING ARTICLES
    • A45B WALKING STICKS; UMBRELLAS; LADIES' OR LIKE FANS
    • A45B23/00 Other umbrellas
    • A HUMAN NECESSITIES
    • A45 HAND OR TRAVELLING ARTICLES
    • A45B WALKING STICKS; UMBRELLAS; LADIES' OR LIKE FANS
    • A45B2200/00 Details not otherwise provided for in A45B
    • A45B2200/10 Umbrellas; Sunshades
    • A45B2200/1009 Umbrellas; Sunshades combined with other objects
    • A45B2200/1018 Umbrellas; Sunshades combined with other objects with illuminating devices, e.g. electrical
    • A HUMAN NECESSITIES
    • A45 HAND OR TRAVELLING ARTICLES
    • A45B WALKING STICKS; UMBRELLAS; LADIES' OR LIKE FANS
    • A45B2200/00 Details not otherwise provided for in A45B
    • A45B2200/10 Umbrellas; Sunshades
    • A45B2200/1009 Umbrellas; Sunshades combined with other objects
    • A45B2200/1027 Umbrellas; Sunshades combined with other objects with means for generating solar energy
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/20 Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04B TRANSMISSION
    • H04B15/00 Suppression or limitation of noise or interference
    • H04B15/02 Reducing interference from electric apparatus by means located at or near the interfering apparatus
    • H04B15/04 Reducing interference from electric apparatus by means located at or near the interfering apparatus the interference being caused by substantially sinusoidal oscillations, e.g. in a receiver or in a tape-recorder

Definitions

  • Figure 1 illustrates a mobile communications device, a third party voice recognition and/or artificial intelligence server, and/or an intelligent umbrella/robotic shading system in a noise-filled environment according to embodiments;
  • Figure 2 illustrates a flowchart of an intelligent umbrella noise reduction or cancellation process utilizing a mobile communications device or an intelligent umbrella or robotic shading system according to embodiments
  • Figures 3A and 3B illustrate the impact of a noise reduction or cancellation process on voice command audio files for intelligent umbrellas and/or robotic shading systems according to embodiments
  • Figure 4 illustrates a microphone and/or LED array in an AI device housing according to embodiments
  • Figure 5A illustrates a shading system including an artificial intelligence engine and/or artificial intelligence interface
  • Figure 5B illustrates a block and dataflow diagram of communications between a shading system and/or one or more external AI servers according to embodiments
  • Figure 6 illustrates an intelligent shading system comprising a shading housing wherein a shading housing comprises an AI or a noise reduction API according to embodiments.
  • references throughout this specification to one implementation, an implementation, one embodiment, embodiments, an embodiment and/or the like means that a particular feature, structure, and/or characteristic described in connection with a particular implementation and/or embodiment is included in at least one implementation and/or embodiment of claimed subject matter.
  • appearances of such phrases, for example, in various places throughout this specification are not necessarily intended to refer to the same implementation or to any one particular implementation described.
  • particular features, structures, and/or characteristics described are capable of being combined in various ways in one or more implementations and, therefore, are within intended claim scope, for example. In general, of course, these and other issues vary with context. Therefore, particular context of description and/or usage provides helpful guidance regarding inferences to be drawn.
  • a network may comprise two or more computing devices and/or may couple network devices so that signal communications, such as in the form of signal packets and/or frames (e.g., comprising one or more signal samples), for example, may be exchanged, such as between a server and a client device and/or other types of devices, including between wireless devices coupled via a wireless network, for example.
  • a network may comprise two or more network and/or computing devices and/or may couple network and/or computing devices so that signal communications, such as in the form of signal packets, for example, may be exchanged, such as between a server and a client device and/or other types of devices, including between wireless devices coupled via a wireless network, for example.
  • computing device refers to any device capable of communicating via and/or as part of a network. While computing devices may be capable of sending and/or receiving signals (e.g., signal packets and/or frames), such as via a wired and/or wireless network, they may also be capable of performing arithmetic and/or logic operations, processing and/or storing signals (e.g., signal samples), such as in memory as physical memory states, and/or may, for example, operate as a server in various embodiments.
  • Computing devices, mobile computing devices, and/or network devices capable of operating as a server, or otherwise, may include, as examples, rack-mounted servers, desktop computers, laptop computers, set top boxes, tablets, netbooks, smart phones, wearable devices, integrated devices combining two or more features of the foregoing devices, the like or any combination thereof.
  • signal packets and/or frames may be exchanged, such as between a server and a client device and/or other types of network devices, including between wireless devices coupled via a wireless network, for example.
  • server, server device, server computing device, server computing platform and/or similar terms are used interchangeably.
  • client, client device, client computing device, client computing platform and/or similar terms are also used interchangeably.
  • references to a “database” are understood to mean one or more databases, database servers, application data servers, proxy servers, and/or portions thereof, as appropriate.
  • a network device may be embodied and/or described in terms of a computing device and/or mobile computing device.
  • this description should in no way be construed that claimed subject matter is limited to one embodiment, such as a computing device or a network device, and, instead, may be embodied as a variety of devices or combinations thereof, including, for example, one or more illustrative examples.
  • Operations and/or processing such as in association with networks, such as computing and/or communications networks, for example, may involve physical manipulations of physical quantities.
  • these quantities may take the form of electrical and/or magnetic signals capable of, for example, being stored, transferred, combined, processed, compared and/or otherwise manipulated. It has proven convenient, at times, principally for reasons of common usage, to refer to these signals as bits, data, values, elements, symbols, characters, terms, numbers, numerals and/or the like.
  • “connected” is used generically to indicate that two or more components, for example, are in direct physical, including electrical, contact; while “coupled” is used generically to mean that two or more components are potentially in direct physical, including electrical, contact; however, “coupled” is also used generically to mean that two or more components are not necessarily in direct contact, but nonetheless are able to co-operate and/or interact.
  • the term “coupled” is also understood generically to mean indirectly connected, for example, in an appropriate context.
  • when signals, instructions, and/or commands are transmitted from one component (e.g., a controller or processor) to another component (or assembly), it is understood that messages, signals, instructions, and/or commands may be transmitted directly to a component, or may pass through a number of other components on the way to a destination component.
  • a signal transmitted from a motor controller or processor to a motor (or other driving assembly) may pass through glue logic, an amplifier, an analog-to-digital converter, a digital-to-analog converter, another controller and/or processor, and/or an interface.
  • a signal communicated through a misting system may pass through an air conditioning and/or a heating module
  • a signal communicated from any one or a number of sensors to a controller and/or processor may pass through a conditioning module, an analog-to-digital controller, and/or a comparison module, and/or a number of other electrical assemblies and/or components.
  • a network may also include, for example, past, present and/or future mass storage, such as network attached storage (NAS), cloud storage, a storage area network (SAN), cloud server farms, and/or other forms of computing and/or device readable media, for example.
  • a network may include a portion of the Internet, one or more local area networks (LANs), one or more wide area networks (WANs), wire-line type connections, one or more personal area networks (PANs), wireless type connections, one or more mesh networks, one or more cellular communication networks, other connections, or any combination thereof.
  • a network may be worldwide in scope and/or extent.
  • the Internet and/or a global communications network may refer to a decentralized global network of interoperable networks that comply with the Internet Protocol (IP). It is noted that there are several versions of the Internet Protocol. Here, the term Internet Protocol, IP, and/or similar terms is intended to refer to any version, now known and/or later developed, of the Internet Protocol.
  • the Internet may include local area networks (LANs), wide area networks (WANs), wireless networks, and/or long haul public networks that, for example, may allow signal packets and/or frames to be communicated between LANs.
  • WWW, Web, World Wide Web and/or similar terms may also be used, although they refer to a part of the Internet that complies with the Hypertext Transfer Protocol (HTTP).
  • network devices and/or computing devices may engage in an HTTP session through an exchange of appropriately compatible and/or compliant signal packets and/or frames.
  • Hypertext Transfer Protocol, HTTP, and/or similar terms is intended to refer to any version, now known and/or later developed. It is likewise noted that in various places in this document substitution of the term Internet with the term World Wide Web (‘Web’) may be made without a significant departure in meaning and may, therefore, not be inappropriate in that the statement would remain correct with such a substitution.
  • the Internet and/or the Web may without limitation provide a useful example of an embodiment at least for purposes of illustration.
  • the Internet and/or the Web may comprise a worldwide system of interoperable networks, including interoperable devices within those networks.
  • a content delivery server and/or the Internet and/or the Web may comprise a service that organizes stored content, such as, for example, text, images, video, etc., through the use of hypermedia, for example.
  • HTML refers to HyperText Markup Language, CSS to Cascading Style Sheets, and XML to Extensible Markup Language.
  • HTML and/or XML are merely example languages provided as illustrations and are intended to refer to any version, now known and/or developed at another time; claimed subject matter is not intended to be limited to examples provided as illustrations, of course.
  • one or more parameters may be descriptive of a collection of signal samples, such as one or more electronic documents, and exist in the form of physical signals and/or physical states, such as memory states.
  • one or more parameters such as referring to an electronic document comprising an image, may include parameters, such as 1) time of day at which an image was captured, latitude and longitude of an image capture device, such as a camera; 2) time and day of when a sensor reading (e.g., humidity, temperature, air quality, UV radiation) was received; and/or 3) operating conditions of one or more motors or other components or assemblies in a modular umbrella shading system.
  • Claimed subject matter is intended to embrace meaningful, descriptive parameters in any format, so long as the one or more parameters comprise physical signals and/or states, which may include, as parameter examples, name of the collection of signals and/or states.
  • a modular umbrella shading system may comprise a computing device installed within or as part of a modular umbrella system, intelligent umbrella and/or intelligent shading charging system.
  • Algorithmic descriptions or symbolic representations are examples of techniques used by those of ordinary skill in the signal processing or related arts to convey the substance of their work to others skilled in the art.
  • An algorithm is here, and generally, considered to be a self- consistent sequence of operations or similar signal processing leading to a desired result.
  • operations or processing involve physical manipulation of physical quantities. Typically, although not necessarily, such quantities may take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared or otherwise manipulated.
  • a special purpose computer or a similar special purpose electronic computing device is capable of manipulating or transforming signals (electronic and/or magnetic) in memories (or components thereof), other storage devices, transmission devices, sound reproduction devices, and/or display devices.
  • a controller and/or a processor typically performs a series of instructions resulting in data manipulation.
  • microcontroller or microprocessor may be a compact microcomputer designed to govern the operation of embedded systems in electronic devices, e.g., an intelligent, automated shading object or umbrella, intelligent umbrella, robotic shading systems, and/or shading charging systems, and various other electronic and mechanical devices coupled thereto or installed thereon.
  • Microcontrollers may include processors, microprocessors, and other electronic components.
  • a controller may be a commercially available processor such as an Intel Pentium, Motorola PowerPC, SGI MIPS, Sun UltraSPARC, or Hewlett-Packard PA-RISC processor, but may be any type of application-specific and/or specifically designed processor or controller.
  • a processor and/or controller may be connected to other system elements, including one or more memory devices, by a bus, a mesh network or other mesh components.
  • a processor or controller may execute an operating system which may be, for example, a Windows-based operating system (Microsoft), a MAC OS System X operating system (Apple Computer), one of many Linux-based operating system distributions (e.g., an open source operating system), a Solaris operating system (Sun), a portable electronic device operating system (e.g., mobile phone operating systems), a microcomputer operating system, and/or a UNIX operating system.
  • Embodiments are not limited to any particular implementation and/or operating system.
  • the specification may refer to an intelligent umbrella / robotic shading system (or an intelligent shading object or an intelligent umbrella) as an apparatus that provides shade and/or coverage to a user from weather elements such as sun, wind, rain, and/or hail.
  • the intelligent umbrella may be an automated intelligent shading object, automated intelligent umbrella, standalone intelligent umbrella, and/or automated intelligent shading charging system.
  • the robotic shading system may also be referred to as a parasol, intelligent umbrella, sun shade, outdoor shade furniture, sun screen, sun shelter, awning, sun cover, sun marquee, brolly and other similar names, which may all be utilized interchangeably in this application.
  • Shading objects and/or robotic shading systems which also have electric vehicle charging capabilities may also be referred to as intelligent umbrella charging systems. These terms may be utilized interchangeably throughout the specification.
  • the robotic shading systems, shading objects, intelligent umbrellas, umbrellas and/or parasols described herein comprise many novel and non-obvious features, which are described in detail below.
  • Figure 1 illustrates a mobile communications device, a third party voice recognition and/or artificial intelligence server, and/or an intelligent umbrella/robotic shading system in a noise-filled environment according to embodiments.
  • a mobile device 105 may comprise one or more processors 108, one or more memory modules 106, one or more microphones 109, and/or computer- readable instructions 107 stored in the one or more memory modules 106 executable by the one or more processors 108.
  • computer-readable instructions 107 executable by a processor may comprise an operating system, driver programs, and/or application programs.
  • computer-readable instructions 107 may comprise voice-recognition software and/or AI software and/or an application programming interface (API) or conduit to voice recognition software and/or AI software executable on another computing device and/or server.
  • the computer-readable instructions 107 executable by one or more processors 108 of the mobile communications device 105 may be executed on a mobile communications device 105, an intelligent umbrella / robotic shading system 110, a combination of the intelligent umbrella/robotic shading system, a mobile communications device 105 and/or a third party server 120, or mainly a third party server 120.
  • sounds and/or voices may be captured by one or more microphones 109 in a mobile communications device and computer-readable instructions 107 executable by one or more processors 108 may initiate a voice recognition and/or AI software application / process.
  • an intelligent shading system and/or robotic shading system 110 may comprise a shading element and/or fabric (or an expansion assembly) 117, a support or core assembly 115, and/or a base assembly 116.
  • an intelligent umbrella and/or shading system 110 may comprise one or more processors 113, one or more microphones 114, and/or one or more memory modules 111.
  • computer-readable instructions 112 may be stored in one or more memory modules 111 and may be executable by one or more processors 113.
  • computer-readable instructions 112 may comprise umbrella and/or single-board computer operating system or microcontroller instructions, application programs, and/or mechanical and/or electrical assembly driver or interface programs.
  • computer-readable instructions 112 may comprise voice-recognition software and/or AI software and/or an application programming interface (API) (or conduit) to voice recognition software and/or AI software executable on another external computing device and/or server.
  • the computer-readable instructions 112 executable by one or more processors 113 may be resident on an intelligent umbrella / robotic shading system and voice-recognition and/or AI may be performed on the intelligent umbrella 110, a combination of the intelligent umbrella/robotic shading system and/or a third party or external server 120, or mainly a third party or external server 120.
  • a mobile communications device 105 may comprise one or more processors 108, one or more memory modules 106, one or more microphones 109, and/or computer-readable instructions 107.
  • the computer-readable instructions 107 may be executable by the one or more processors 108 to provide noise cancellation, background noise cancellation and/or noise adjustment for users and/or operators of a mobile communications device 105 communicating commands, instructions and/or messages to an intelligent umbrella and/or robotic shading systems 110.
  • sounds and/or voices may be captured by one or more microphones 114 in an intelligent umbrella / robotic shading system 110 and computer-readable instructions 112 executable by one or more processors 113 may initiate a voice recognition and/or AI software application and/or process.
  • a third party or external server 120 may comprise one or more processors 123 and/or one or more memory modules 121.
  • computer-readable instructions 122 may be stored in one or more memory modules 121 and executable by the one or more processors 123 to perform voice recognition and/or artificial intelligence on voice commands and/or corresponding audio files that were captured by the mobile communications device 105 and/or the intelligent umbrella 110 and communicated from the mobile communications device 105 and/or the intelligent umbrella 110.
  • background noise and/or ambient noise may be present in an environment where intelligent umbrellas and/or robotic shading systems are installed and/or located, or in a vicinity thereof.
  • voice recognition and/or artificial intelligence becomes more difficult because noise (e.g., ambient or background noise) interferes with accurate capture and/or recognition of voice commands from users and operators.
  • background noise and/or ambient noise may be present due to weather conditions (e.g., lightning 133, rain 132, and/or wind).
  • background noise and/or ambient noise may be present due to transportation noise near an environment where an intelligent umbrella and/or robotic shading system is located or present.
  • transportation noise may be caused and/or generated from a drone 131, aircraft 135, cars, trucks and/or highway noise.
  • background noise and/or ambient noise may be caused or generated by electrical equipment and/or mechanical equipment.
  • equipment generated noise may be generated and/or caused by robots 136, air conditioners 134, sprinkler systems, lawnmowers and/or neighbor stereo systems. Accordingly, in order for voice recognition and/or artificial intelligence software applications (e.g., computer-readable instructions) to operate in an accurate and/or efficient fashion, ambient or background noise may need to be reduced, lowered and/or cancelled by software, hardware and/or a combination thereof.
  • voice commands (or audible commands) received by a mobile communications device 105 and/or intelligent shading device 110 may be more efficiently, effectively and/or accurately processed after a noise reduction and/or cancellation process is performed either in a mobile communications device 105, an intelligent umbrella and/or shading system 110, and/or a third party or external server 120.
  • Outdoor ambient and background noise are normally not issues inside a structure (e.g., house, hotel or office) because of walls, windows, shades and/or other indoor structures that provide noise protection from weather conditions, transportation noise and/or outdoor equipment noise.
  • a user and/or operator may utilize a mobile communications device 105 to communicate voice commands, instructions and/or messages to an intelligent umbrella and/or robotic shading system 110.
  • a user and/or operator may speak into a microphone 109 on a mobile communications device 105, and the mobile communications device 105 may capture the spoken commands (e.g., audible commands) and may convert them into one or more audio files (or audio command files).
  • voice recognition may be performed on the received one or more audio files at either the mobile device 105, the intelligent umbrella and/or robotic shading system 110, and/or a third party voice recognition or AI server 120 in order to identify commands directed to an intelligent umbrella and/or robotic shading system.
  • these commands may be referred to as umbrella commands and/or robotic shading commands.
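The application describes this capture step functionally rather than in code. As a minimal, non-authoritative sketch, a mobile communications device or umbrella computing device might record an audible command to an audio file roughly as follows; the Python `sounddevice` library, the 16 kHz sample rate, and all names are assumptions, not part of the disclosure:

```python
# Hypothetical sketch: capture a spoken umbrella command as a WAV audio file.
# Assumes the third-party `sounddevice` library; rate/duration are illustrative.
import wave

import sounddevice as sd

SAMPLE_RATE = 16_000   # 16 kHz is a common rate for speech recognition
DURATION_S = 3         # listen for a few seconds after activation

def capture_command(path="command.wav"):
    # Record mono 16-bit audio from the default microphone.
    frames = sd.rec(int(DURATION_S * SAMPLE_RATE),
                    samplerate=SAMPLE_RATE, channels=1, dtype="int16")
    sd.wait()  # block until the recording finishes
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)  # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(frames.tobytes())
    return path
```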
  • intelligent umbrellas and/or robotic shading systems 110 may be located in outdoor environments and thus noise and/or other sounds may interfere with the umbrella and/or robotic shading voice commands being recognized and/or understood correctly by voice recognition and/or artificial intelligence software (whether executed on a mobile communications device 105, an intelligent umbrella / robotic shading system 110 or voice recognition server 120).
  • the noise may be ambient and/or background noise that is present either periodically, at most times, or sporadically.
  • generation and presence of background and/or ambient noise may be repetitive and predictable due to certain conditions being present (e.g., at different timeframes, on different days of the week, or during specific environmental conditions).
  • an intelligent umbrella and/or robotic shading system 110 may be located near an airport, a noise-generating power plant, railroad tracks and/or near a freeway, and thus ambient and/or background noise may be present during operating hours of the airport, power plant, railroad and/or freeway.
  • noise patterns may be similar at these different times of day and/or under specific conditions. For example, when a sprinkler and/or an air conditioner is on, certain noise may regularly be present. Freeways may have similar noise patterns from 6 to 10 AM in the morning and 3 to 7:30 PM in the evening when heavy traffic flows are present. As discussed, this background and/or ambient noise may cause errors in recognition of umbrella and/or robotic shading commands.
  • a method and/or process described herein may filter, reduce and/or eliminate background noise in order to improve accuracy of recognition of umbrella and/or robotic shading commands.
  • Figure 2 illustrates a flowchart of an intelligent umbrella noise reduction or cancellation process utilizing a mobile communications device and/or an intelligent umbrella or robotic shading system according to embodiments.
  • a mobile communications device 105 (e.g., via computer-readable instructions executable by one or more processors) and/or an intelligent umbrella / automatic shading system 110 may activate one or more microphones.
  • computer-readable instructions executable by one or more processors of a mobile communications device 105 may capture 210 an audio file for an environment surrounding an intelligent umbrella and/or robotic shading system 110 (or in a vicinity of an umbrella / robotic shading system).
  • one or more audio files may be captured for a specified period of time (e.g., 1 minute, 2 minutes or 4 minutes) at a specific time of a day (morning, afternoon and evening or 6:00 am, 1:00 pm or 9:00 pm).
  • times of capture (or days of capture) may be preset and/or pre-established times or may be initiated by a user or operator.
  • computer-readable instructions executable by one or more processors of a mobile computing device may also capture 215 a time of day, a geographic location (e.g., from a GPS system), and/or environmental sensor measurements.
  • computer-readable instructions may store 220 a captured audio file and/or time of day, geographic location, or environmental sensor measurements in one or more memory modules (e.g., a memory and/or database of a mobile computing device, an intelligent umbrella and/or robotic shading system and/or a remote server) for later utilization in a noise reduction and/or cancellation process.
  • a process described above may occur automatically at certain times of a day or night without user and/or operator intervention.
  • a noise reduction and/or cancellation capture process may occur: 1) via user input, e.g., a user selecting an icon to initiate computer-readable instructions to capture ambient and/or background noise audio for a noise cancellation and/or reduction capture process; or 2) via a user or operator speaking a noise cancellation or reduction initiating (or capture) command, with a mobile communications device (and/or an intelligent umbrella / automatic shading system) recognizing and/or responding to such command.
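As a rough illustration of the capture-and-store step (the application does not prescribe a storage format), each baseline noise capture could be logged with its time, location, and sensor readings; the JSON-lines file and all field names below are purely hypothetical:

```python
# Hypothetical sketch: log a baseline ambient/background noise capture together
# with time-of-day, location, and sensor metadata for later retrieval.
import json
from datetime import datetime

CAPTURE_TIMES = ("06:00", "13:00", "21:00")  # illustrative preset capture times

def store_noise_capture(audio_path, latitude, longitude, sensors):
    record = {
        "audio_file": audio_path,
        "captured_at": datetime.now().isoformat(),
        "location": {"lat": latitude, "lon": longitude},
        "sensors": sensors,  # e.g. {"temp_c": 24.0, "wind_mps": 3.1}
    }
    # Append one JSON record per capture to a simple log-style database.
    with open("noise_log.jsonl", "a") as log:
        log.write(json.dumps(record) + "\n")
    return record
```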
  • a noise reduction or cancellation capture process may be initiated 230 more than one time a day, because different environmental conditions (e.g., different weather patterns such as wind and temperature) may be present at different times of a day.
  • a noise cancellation or reduction capture process may occur once in the morning (e.g., 9:00 am), once in the afternoon (e.g., 2:00 pm) and once in the evening.
  • a number of times the noise cancellation or reduction capture process may occur or be initiated to capture background and/or ambient noise may be dependent on intelligent umbrella / robotic shading system usage patterns.
  • a noise cancellation and/or reduction capture process may be initiated and ambient and/or background noise may be captured around 2 pm, 5:30 pm and 8 pm in order to capture and/or monitor background and/or ambient noise during these timeframes.
  • where noise conditions are relatively constant from 9:00 am to 3:00 pm, and this is when a user and/or operator may utilize an intelligent umbrella / robotic shading system, a noise cancellation and/or reduction capture process may be initiated once around 9:00 am and the captured background and/or ambient noise may be representative of the entire timeframe.
  • after ambient and/or background noise is captured as an audio file and stored in one or more memory modules and/or databases, the captured noise may be utilized as baseline noise for mobile communications device and/or intelligent umbrella and/or robotic shading system voice recognition software and/or artificial intelligence software.
  • one or more baseline ambient and/or background noise audio files may be stored in their entirety.
  • noise files captured in steps 205 through 230, described above, may be stored as noise audio files in memory modules of a mobile communications device, intelligent umbrella/shading system and/or external servers.
  • one or more noise audio files may be sampled in order to generate a plurality of samples of ambient and/or background noise audio files and the plurality of noise samples may be stored in one or more memory modules of a mobile communications device, intelligent umbrella/shading system and/or external servers.
  • corresponding frequencies and/or times may be stored in one or more memory modules of mobile communications device 105, intelligent umbrella/shading system 110 and/or external servers 120 and may be utilized for noise reduction and/or cancellation purposes.
  • other corresponding measurements may also be stored in one or more memory modules of mobile communications device, intelligent umbrella/shading system and/or external servers, (such as time of audio capture, day of audio capture, one or more sensor readings when audio or audible commands captured, other environmental conditions, etc.), which may be utilized to match or closely correspond to existing conditions when noise cancellation and/or reduction is performed.
  • computer-readable instructions executable by one or more processors of mobile communications device, intelligent umbrella/shading system and/or external servers may match or attempt to match (e.g., find a closest condition) existing conditions or time frames and retrieve corresponding ambient and background noise audio files and/or samples from the one or more memory modules.
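One plausible, hypothetical way to "find a closest condition" is to score stored captures by their distance in time-of-day, with a penalty for a differing day of week; nothing in the application mandates this particular metric:

```python
# Hypothetical sketch: retrieve the stored baseline noise capture whose
# conditions (time of day, day of week) best match the present moment.
import json
from datetime import datetime

def closest_baseline(log_path="noise_log.jsonl"):
    now = datetime.now()
    now_min = now.hour * 60 + now.minute
    best, best_score = None, float("inf")
    with open(log_path) as log:
        for line in log:
            rec = json.loads(line)
            cap = datetime.fromisoformat(rec["captured_at"])
            # Distance in minutes-of-day, wrapping around midnight,
            # plus a penalty when the day of week differs.
            d = abs(cap.hour * 60 + cap.minute - now_min)
            d = min(d, 1440 - d)
            if cap.weekday() != now.weekday():
                d += 120
            if d < best_score:
                best, best_score = rec, d
    return best  # record of the best-matching baseline noise file
```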
  • recently captured ambient and/or background noise audio files may be added to existing ambient and/or background noise audio files in order to compile a history of ambient and/or background noise audio files.
  • recently captured ambient and/or background noise files may correspond to a specific time of day, day of the week, and/or other environmental conditions and may be stored with like ambient and/or background noise audio files (e.g., audio files and/or measurements captured at a same time of day and/or same day of week).
  • database records may include ambient and/or background noise files, corresponding time of days, corresponding days of weeks, and/or measurements (e.g., sensor or umbrella measurements).
  • database records may include a field or flag identifying other like measurements (e.g., time of day, day or week).
  • a noise reduction and/or cancellation process may utilize an average of a last few (e.g., three or five) captured ambient and/or background noise audio files as a baseline.
  • a noise reduction and/or cancellation process may average all or nearly all of stored captured ambient and/or background noise audio files as a baseline.
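A baseline built by averaging, as described in the two bullets above, might be computed over magnitude spectra; this sketch (with an assumed frame size, using NumPy throughout) averages the last few captures:

```python
# Hypothetical sketch: average magnitude spectra of the last few captured
# noise clips to form a per-frequency baseline for later subtraction.
import numpy as np

def average_noise_spectrum(noise_clips, n_fft=512, last_n=3):
    """noise_clips: list of 1-D float arrays sharing one sample rate."""
    spectra = []
    for clip in noise_clips[-last_n:]:
        usable = len(clip) // n_fft * n_fft            # trim to whole frames
        frames = np.reshape(clip[:usable], (-1, n_fft))
        spectra.append(np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0))
    return np.mean(spectra, axis=0)                    # baseline magnitude per bin
```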
  • computer-readable instructions executable by one or more processors may also spot and/or identify faulty and/or out-of-tolerance noise readings. In embodiments, this may signal problems with and/or malfunctions of components within a mobile communications device and/or intelligent umbrella / robotic shading system (e.g., such as failure of a microphone and/or components to capture audio files).
  • computer-readable instructions executable by one or more processors may generate an error message and/or may request that a mobile communications device and/or intelligent umbrella 1) capture a subsequent ambient and background noise audio file and determine if the subsequent ambient and/or background noise audio file is in tolerance levels or a range of previously collected ambient and/or background noise audio files; and/or 2) perform a diagnostic test on one or more microphones and/or associated circuitry within an intelligent umbrella and/or mobile communications device.
  • computer-readable instructions executable by one or more processors may also request a recalibration process be completed for one or more microphones and/or other associated circuitry within an intelligent umbrella/robotic shading system.
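The out-of-tolerance check described above could be as simple as comparing a new capture's RMS level against the range of historical baselines; the factor-of-three tolerance below is an arbitrary, illustrative choice:

```python
# Hypothetical sketch: flag a suspect noise capture (possibly a failed
# microphone) when its level falls outside the historical range.
import numpy as np

def is_out_of_tolerance(new_clip, history_rms, factor=3.0):
    """history_rms: RMS levels of previously collected baseline captures."""
    rms = float(np.sqrt(np.mean(np.square(new_clip, dtype=np.float64))))
    lo = min(history_rms) / factor
    hi = max(history_rms) * factor
    # True -> generate an error message, re-capture, or run diagnostics.
    return not (lo <= rms <= hi)
```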
  • a user or operator may speak a voice command into a mobile phone (and/or an intelligent umbrella / automatic shading system) microphone to instruct and/or request an intelligent umbrella/robotic shading system to perform actions related to the command (e.g., such as rotate 30 degrees about a vertical and/or azimuth axis).
  • voice commands may be established for activating intelligent umbrella / robotic shading systems lighting systems, image capture devices, computing devices, audio-video receiver and speakers, sensors, azimuth motors, elevation motors, expansion motors, base assembly motors, solar panels, and/or a variety of software applications resident within the intelligent umbrella / robotic shading system and/or third party server (health software applications, point of service software applications, Internet of Things software applications, energy calculation software applications, video and/or audio storage software applications, etc.).
  • computer-readable instructions executed by a processor may capture the voice command or audible command 250, via one or more microphones, and generate an audio command file 255.
  • computer-readable instructions in a noise reduction process executable by one or more processors may determine and/or calculate 260 a time of day (and/or time of week and/or other environmental conditions) and retrieve 265 a corresponding baseline ambient or background noise audio file (or a sample file), and/or other corresponding and similarly captured information, from the one or more memory modules.
  • one or more captured audio command files may be noise reduced and/or filtered 270 with respect to a corresponding ambient and/or background noise file (e.g., which may have been retrieved from a memory module).
  • one or more ambient or background noise files may be subtracted from one or more audio command files.
  • computer-readable instructions executed by one or more processors may utilize 270 a corresponding ambient and/or background noise file to reduce, cancel and/or eliminate ambient and/or background noise from the captured audio command file.
  • computer-readable instructions executable by one or more processors may generate 275 one or more noise-reduced audio command files that may be utilized by voice-recognition and/or artificial intelligence applications with a higher degree of accuracy.
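The "subtraction" of a noise file from a command file described above is commonly realized as spectral subtraction. This non-authoritative sketch pairs with the baseline spectrum computed earlier; the frame size and zero-flooring strategy are assumptions:

```python
# Hypothetical sketch: simple spectral subtraction of a baseline noise
# magnitude spectrum from a captured audio command file.
import numpy as np

def spectral_subtract(command, noise_mag, n_fft=512):
    """command: 1-D float array; noise_mag: baseline magnitude per rfft bin."""
    out = np.zeros_like(command, dtype=np.float64)
    for start in range(0, len(command) - n_fft + 1, n_fft):
        frame = command[start:start + n_fft]
        spec = np.fft.rfft(frame)
        mag = np.abs(spec)
        # Subtract the baseline magnitude, keep the phase, floor at zero.
        clean_mag = np.maximum(mag - noise_mag, 0.0)
        clean = clean_mag * np.exp(1j * np.angle(spec))
        out[start:start + n_fft] = np.fft.irfft(clean, n=n_fft)
    return out  # noise-reduced audio command signal
```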
  • one or more noise-reduced audio command files may have ambient and/or background noise components (such as wind-generated noise, railroad train noise, freeway or traffic noise, air conditioner noise, and/or neighbor-generated music noise) eliminated, cancelled and/or reduced, which allows voice-recognition and/or artificial intelligence software applications to provide more accurate results and generate fewer errors.
  • computer-readable instructions executed by one or more processors may communicate 280 one or more noise-reduced audio command files as input to a voice-recognition and/or artificial intelligence software application.
  • third-party voice recognition and/or artificial intelligence servers may include Amazon Echo software and servers, Google Now software and servers, Apple Siri software and servers, Microsoft Cortana software and servers, Teneo software and servers or Viv’s AI software and servers.
  • Location of noise reduction, elimination and/or cancellation software applications may be dependent on: 1) device and/or server hardware constraints (e.g., processor/controller power; memory; and/or storage (hard drives, solid-state drives, flash memory); 2) communication network constraints (e.g., what wireless and/or wired communication networks and/or transceivers are present in location and whether communication networks have enough bandwidth to handle voice recognition and/or AI applications); and/or 3) user constraints (e.g., does a user and/or operator have a mobile communications device and/or did user purchase an intelligent umbrella/robotic shading system with voice recognition and/or AI functionality).
  • Figures 3A and 3B illustrate the impact of a noise reduction or cancellation process on voice command audio files for intelligent umbrellas and/or robotic shading systems according to embodiments.
  • Figure 3A illustrates amplitude 305 of one or more command audio files for specified timeframes as well as amplitude 310 of one or more captured background and/or ambient noise files for specified timeframes.
  • audio files may comprise amplitudes of command audio files at different wavelengths and/or frequencies; corresponding amplitudes of background and/or ambient noise at different wavelengths and/or frequencies may be reduced and/or cancelled during a noise reduction and/or cancellation process.
  • Figure 3B illustrates an impact of noise reduction or cancellation process on voice command audio files (e.g., amplitudes of voice command audio files).
  • Figure 3B illustrates an amplitude 320 of a noise-reduced command audio file for specified times according to embodiments. As illustrated by Figure 3B, amplitude may be reduced for specified timeframes. In addition, there may be a break in a speaker’s voice during a timeframe in which ambient and/or background noise was present. In this example, a noise-reduction and/or cancellation process may remove and/or eliminate such a noise signal component from a captured audio file because there was no corresponding spoken command during that timeframe.
  • a noise reduction or cancellation process may reduce and/or eliminate noise created by a sprinkler system and be able to provide noise-reduced audio command files that are better analyzed by voice recognition and/or AI software applications.
  • an intelligent umbrella / robotic shading system may be moved from one location to another.
  • an intelligent umbrella / robotic shading system may be moved from a location where background noise is mainly air conditioners, lawn mowers and/or sprinklers and may move to a different location where background and ambient noise is generated by railroad tracks and freeway noise.
  • a user and/or operator may recapture background and/or ambient noise for a new environment to which the intelligent umbrella / robotic shading system has been moved and re-located.
  • steps 205–230 of Figure 2 may be repeated for the new environment and/or new location.
  • certain background and/or ambient noises stored in memory modules and/or databases of 1) mobile communication devices, 2) intelligent umbrellas / robotic shading systems and/or 3) voice recognition and/or AI servers may be retrieved automatically and/or by user input.
  • a user and/or operator may not know exact timeframes when ambient and/or background noises may be generated but may know that these noises will likely be present in an environment where an intelligent umbrella and/or robotic shading system may be located. In these circumstances, a user and/or operator may capture such ambient and/or background noises as they occur and store them along with related measurements (time, day and/or conditions) in a database and/or memory modules.
  • ambient and/or background noise audio files may be captured for lawnmower noise, aircraft noise, drone noise, highway noise and/or sprinkler noise.
  • a user or operator may provide a name or type identification for the one or more noise files. Because these ambient and/or background noise files are now stored in memory modules and/or databases of intelligent umbrellas / robotic shading systems, mobile communication devices and/or third party servers, the files may be retrieved when such conditions occur in the environment where the intelligent umbrella and robotic shading system is installed.
  • computer-readable instructions executed by one or more processors may generate an input screen where a user and/or operator can select specific noises which are present in a current environment and thus may need to be cancelled and/or reduced when a user or operator speaks a voice command directed to an action to be performed by an intelligent umbrella and/or robotic shading system.
  • a user could select that sprinkler and/or airplane noise files may need to be retrieved and applied during a noise reduction process to generate noise-reduced audio files.
  • these pre-established noise files may also have an associated name, which a user and/or operator may speak before speaking a voice command.
  • a user may speak “sprinkler” and/or “aircraft” to retrieve corresponding ambient and/or background noise files and then speak “activate lighting assembly and cameras.”
  • sprinkler and/or aircraft ambient and/or background noise may be removed from the received “activate lighting assembly and cameras” captured audio file.
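A minimal way to model the named noise profiles in the preceding bullets is a lookup table keyed by the spoken profile name; the names and file paths here are hypothetical:

```python
# Hypothetical sketch: map spoken profile names ("sprinkler", "aircraft") to
# stored ambient/background noise files applied during noise reduction.
NOISE_PROFILES = {
    "sprinkler": "noise/sprinkler.wav",
    "aircraft": "noise/aircraft.wav",
    "lawnmower": "noise/lawnmower.wav",
}

def profiles_for(spoken_names):
    """Return stored noise files for the names a user spoke; ignore unknowns."""
    return [NOISE_PROFILES[name] for name in spoken_names
            if name in NOISE_PROFILES]

# e.g. profiles_for(["sprinkler", "aircraft"]) selects the noise files to
# subtract before processing "activate lighting assembly and cameras".
```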
  • a noise reduction, elimination and/or cancellation process may be implemented in and/or utilized with respect to intelligent umbrellas and/or robotic shading systems, such as the intelligent umbrellas, modular umbrella shading systems and shading systems described in U.S. patent application serial No. 15/394,080, filed
  • a noise reduction, elimination and/or cancellation process may be implemented and/or utilized with respect to Artificial Intelligence devices with Shading Systems as described in U.S. patent application serial No. 15/418,380, filed January 27, 2016 and entitled “Shading System with Artificial Intelligence Application Programming Interface.”
  • a noise reduction, elimination and/or cancellation process may be implemented in and/or utilized with respect to marine vessel intelligent umbrellas and/or shading systems, as described in U.S. patent application serial No. 15/436,739, filed February 17, 2017 and entitled “Marine Vessel with Intelligent Shading Systems.”
  • FIG. 4 illustrates a microphone and/or LED array in an AI device housing according to embodiments.
  • a microphone and/or LED array 400 may comprise a plastic housing 405, one or more flexible printed circuit boards (PCBs) or circuit assemblies 410, one or more LEDs or LED arrays 415 and/or one or more microphones and/or microphone arrays 420.
  • a plastic housing 405 may be oval or circular in shape.
  • a plastic housing 405 may be fitted around a shaft, a post and/or tube of, for example, a support assembly 115 in an intelligent umbrella and/or robotic shading system 110.
  • a plastic housing 405 may be adhered to, connected to and/or fastened to a shaft, a post and/or tube.
  • a flexible PCB or housing 410 may be utilized to mount and/or connect electrical components and/or assemblies such as LEDs 415 and/or microphones 420.
  • a flexible PCB or housing 410 may be mounted, adhered or connected to a plastic housing or ring 405.
  • a flexible PCB or housing 410 may be mounted, adhered or connected to an outer surface of a plastic housing or ring 405.
  • a plastic housing or ring 405 may have one or more waterproof openings 425 for venting heat from one or more microphone arrays 420 and/or one or more LED arrays 415.
  • a plastic housing or ring 405 may have one or more waterproof openings for keeping water away and/or protecting one or more microphone arrays 420 and/or one or more LED arrays 415 from moisture and/or water.
  • one or more LED arrays 415 may be mounted and/or connected on an outer surface of a flexible PCB strip 410 and may be positioned at various locations on the flexible PCB 410 to provide lighting in areas surrounding a shading and AI system.
  • one or more LED arrays may be spaced at uniform distances around a plastic housing 405 (e.g., or ring housing).
  • one or more microphones or microphone arrays 420 may be mounted and/or connected to a flexible PCB strip 410.
  • one or more microphones or microphone arrays 420 may be positioned at one or more locations around a housing or ring 405 to be able to capture audible sound and/or voice commands coming from a variety of directions. In embodiments, one or more microphones or microphone arrays 420 may be spaced at set and/or uniform distances around a housing and/or ring 405.
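Uniform spacing around the ring housing amounts to placing elements at equal angular increments; a small geometric sketch follows, in which the element count and radius are illustrative values, not taken from the application:

```python
# Hypothetical sketch: compute uniform mounting angles/positions for N
# microphones or LEDs spaced around a ring housing of a given radius.
import math

def ring_positions(n, radius_mm):
    """Return (angle_deg, x_mm, y_mm) for n elements spaced uniformly."""
    positions = []
    for i in range(n):
        theta = 2 * math.pi * i / n
        positions.append((math.degrees(theta),
                          radius_mm * math.cos(theta),
                          radius_mm * math.sin(theta)))
    return positions

# e.g. ring_positions(4, 40.0) places elements every 90 degrees on the ring.
```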
  • FIG. 5A illustrates a shading system including an artificial intelligence engine and/or artificial intelligence interface.
  • a shading system including artificial intelligence (AI) 500 may include a shading element or shade (or an expansion assembly/arm expansion assembly) 503, a shading support 505 and a shading device housing 508.
  • a shading element or shade (or an expansion assembly/arm expansion assembly) 503 may provide shade to keep a shading device housing 508 from overheating.
  • a shading element or shade 503 (or an expansion assembly/arm expansion assembly) may include a shading fabric.
  • a shading device housing 508 may be coupled and/or connected to a shading support 505.
  • a shading support 505 may be coupled to a shading device housing 508.
  • a shading support 505 may support a shade or shading element 503 (or an expansion assembly/arm expansion assembly) and move it into position with respect to a shading device housing 508.
  • a shading device housing 508 may be utilized as a base, mount and/or support for a shading element or shade 503.
  • a shading support may be simplified and may not have a tilting assembly (as in Figure 6 described below where an upper housing of a core module assembly 630C is rotated about (or moved about) a lower housing of a core module assembly 630C).
  • a shading support may be simplified and not have a core assembly.
  • a shading support 505 may also not include an expansion and sensor assembly (as is shown in Figure 6).
  • a shading support / support assembly 505 may not comprise an integrated computing device and/or may not have sensors.
  • a shading element or shade (or an expansion assembly/arm expansion assembly) 503 or a shade support/support assembly 505 may comprise one or more sensors (e.g., environmental sensors).
  • sensors may be a temperature sensor, a wind sensor, a humidity sensor, an air quality sensor, and/or an ultraviolet radiation sensor.
  • a shading support may not include an audio system (e.g., a speaker and/or an audio/video transceiver) and may not include lighting assemblies.
  • a shading housing 508 may not include one or more lighting assemblies.
  • a shading device housing 508 may comprise a computing device 520.
  • a shading device housing 508 may comprise one or more processors/controllers 527, one or more memory modules 528, one or more microphones (or audio receiving devices) 529, one or more PAN transceivers 530 (e.g., Bluetooth transceivers), one or more wireless transceivers 531 (e.g., WiFi or other 802.11 transceivers), and/or one or more cellular transceivers 532 (e.g., EDGE transceiver, 4G, 3G, CDMA and/or GSM transceivers).
  • the processors, memory, transceivers and/or microphones may be integrated into a computing device 520, while in other embodiments a single-board computing device may not be utilized.
  • one or more memory modules 528 may contain computer-readable instructions, the computer-readable instructions being executed by one or more processors/controllers 527 to perform certain functionality.
  • the computer-readable instructions stored in one or more memory modules 528 may comprise an artificial intelligence API 540.
  • computer-readable instructions stored in one or more memory modules 528 may comprise a noise cancellation and/or reduction software application 541 and/or application programming interface 541.
  • noise cancellation and/or reduction computer-readable instructions 541 may be executed by processors/controllers 527 in a shading device housing 508.
  • an artificial intelligence API 540 may allow communications between a shading device housing 508 and a third party artificial intelligence engine housed in a local and/or remote server and/or computing device 550.
  • a noise cancellation and/or reduction API 541 may allow communications between a shading device housing 508 and a third party noise cancellation engine housed in a local and/or remote server and/or computing device 550.
  • an AI API 540 may be a voice recognition AI API, which may be able to communicate sound files (e.g., analog or digital sound files) to a third party voice recognition AI server (e.g., server 550).
  • a voice recognition and/or AI server may be an Amazon Alexa, Echo, Echo Dot and/or a Google Now server, which each include AI computer-readable instructions executable by one or more processors.
  • a shading device housing 508 may comprise one or more microphones 529 to capture audio, and specifically audible and/or voice commands, spoken by users and/or operators of shading systems 500.
  • one or more microphones may also be present and/or installed in a mobile communications device 510 to capture audio, and/or audible and voice commands, spoken by users and/or operators of shading systems. The process of a mobile communications device 510 capturing spoken audible audio and/or voice commands is described above in the discussion of Figure 1.
  • computer-readable instructions executed by one or more processors 527 may receive captured sounds and create analog and/or digital audio files corresponding to spoken audio commands (e.g., voice commands such as “open shading system,” “rotate shading system,” “elevate shading system,” “select music to play on shading system,” “turn on lighting assemblies,” etc.).
  • a noise reduction or cancellation API 541 may communicate audio files to an external AI and/or voice recognition server 550, which may perform noise reduction or cancellation on the communicated audio files (e.g., corresponding to the spoken commands).
  • noise reduction or cancellation computer-readable instructions 541 executable by one or more processors may cancel and/or reduce noise from the received audio files, and thus may generate noise-reduced audio files
  • a shading device housing 508 may communicate generated audio files to external AI servers 550 via or utilizing one or more PAN transceivers 530, one or more wireless local area network transceivers 531, and/or one or more cellular transceivers 532.
  • communications with an external AI server 550 may occur utilizing PAN transceivers 530 (and protocols).
  • communications with an external AI server 550 may occur utilizing a local area network (802.11 or WiFi) transceiver 531.
  • communications with an external AI server 550 may occur utilizing a cellular transceiver 532 (e.g., utilizing 3G and/or 4G or other cellular communication protocols).
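The application only states that audio files are communicated over a PAN, WiFi/802.11, or cellular link. Purely as an assumed example, an HTTP upload to an external recognition server might look like the following; the URL and response shape are invented for illustration:

```python
# Hypothetical sketch: upload a captured (or noise-reduced) audio file to an
# external voice recognition / AI server over HTTP. Endpoint is an assumption.
import requests

def send_audio(path, url="https://ai-server.example.com/recognize"):
    with open(path, "rb") as f:
        resp = requests.post(
            url,
            files={"audio": ("command.wav", f, "audio/wav")},
            timeout=10,
        )
    resp.raise_for_status()
    return resp.json()  # e.g. {"command": "rotate shading system"}
```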
  • a shading device housing 508 may utilize more than one microphone 529 to allow capture of voice commands from a number of locations and/or orientations with respect to a shading system 500 (e.g., in front of, behind a shading system, and/or at a 45 degree angle with respect to a support assembly 505).
  • FIG. 5B illustrates a block and dataflow diagram of communications between a shading system and/or one or more external AI servers according to embodiments.
  • a shading system 570 may communicate with an external AI server 575 and/or additional content servers 580 via wireless and/or wired communications networks.
  • a user may speak 591 a command (e.g., turn on lights, or rotate shading system), which is captured as an audio file and received at an AI API 540.
  • an AI API 540 may communicate and/or transfer 592 an audio file (utilizing a transceiver, e.g., PAN, WiFi/802.11, or cellular) to an external or third-party AI server 575.
  • an external or third-party AI server 575 may comprise a noise reduction or cancellation engine or module 584.
  • an external AI server 575 may comprise a voice recognition engine or module 585, a command engine module 586, a third party content interface 587 and/or third party content formatter 588.
  • an external AI server 575 may receive 592 one or more audio files.
  • a noise reduction or cancellation engine or module 584 may receive the communicated audio file, reduce ambient and/or background noise in the communicated audio file and generate one or more noise-reduced audio files.
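As an illustration of the kind of processing a noise reduction or cancellation engine or module 584 might perform, below is a minimal spectral-subtraction sketch in Python. It assumes a separate noise-only recording at least one frame long is available and that both recordings share one sample rate; a production engine would be considerably more sophisticated.

```python
import numpy as np


def spectral_subtract(command: np.ndarray, noise: np.ndarray,
                      frame: int = 512) -> np.ndarray:
    """Subtract an average noise magnitude spectrum from a command recording."""
    # Average noise magnitude per FFT bin, estimated over whole frames.
    usable = len(noise) // frame * frame
    noise_mag = np.abs(
        np.fft.rfft(noise[:usable].reshape(-1, frame), axis=1)).mean(axis=0)

    out = np.zeros(len(command), dtype=float)
    for start in range(0, len(command) - frame + 1, frame):
        spec = np.fft.rfft(command[start:start + frame])
        mag = np.maximum(np.abs(spec) - noise_mag, 0.0)   # floor at zero
        # Rebuild the frame with the original phase and reduced magnitude.
        out[start:start + frame] = np.fft.irfft(
            mag * np.exp(1j * np.angle(spec)), n=frame)
    return out  # any trailing partial frame is left silent
```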
  • a voice recognition engine or module 585 may convert one or more noise-reduced audio files to a device command (e.g., shading system commands, computing device commands) and communicate 593 device commands to a command engine module or engine 586.
  • a command engine or module 586 may communicate and/or transfer 594 a generated command, message, and/or instruction to a shading system 500.
  • a shading system 500 may receive the communicated command, communicate and/or transfer 595 the communicated command to a controller/processor 571.
  • the controller/processor 571 may generate 596 a command, message, signal and/or instruction to cause an assembly, component, system or devices 572 to perform an action requested in the original voice command (open or close shade element, turn on camera, activate solar panels).
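A minimal sketch of how a controller/processor 571 might dispatch a recognized command to an assembly, component, system or device 572 follows. The command strings and action functions are hypothetical placeholders; the embodiments do not prescribe a particular dispatch mechanism.

```python
from typing import Callable, Dict


def open_shade() -> None:
    print("expansion motor: deploying shade element")


def close_shade() -> None:
    print("expansion motor: retracting shade element")


def camera_on() -> None:
    print("camera assembly: powered on")


def solar_on() -> None:
    print("solar panels: activated")


# Hypothetical mapping from recognized command text to umbrella actions.
DISPATCH: Dict[str, Callable[[], None]] = {
    "open shading system": open_shade,
    "close shading system": close_shade,
    "turn on camera": camera_on,
    "activate solar panels": solar_on,
}


def execute(recognized: str) -> None:
    action = DISPATCH.get(recognized.lower().strip())
    if action is None:
        print(f"unrecognized command: {recognized!r}")
    else:
        action()


execute("Open shading system")
```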
  • a user may request actions to be performed utilizing a shading system’s microphones and/or transceivers that may require interfacing with third party content servers (e.g., NEST, e-commerce site selling sun care products, e-commerce site selling parts of umbrellas or shading systems, communicating with online digital music stores (e.g., iTunes), home security servers, weather servers and/or traffic servers).
  • a shading system user may request: 1) traffic conditions from a third party traffic server; 2) playing of a playlist from a user’s digital music store accounts; and/or 3) ordering a replacement skin and/or spoke/blade arms for a shading system.
  • additional elements and steps may be added to the previously described method and/or process.
  • a user may speak 591 a command or desired action (execute playlist, order replacement spokes/blades, and/or obtain traffic conditions from a traffic server) which is captured as an audio file and received at an AI API 540 stored in one or more memories of a shading system housing 570.
  • an AI API 540 may communicate and/or transfer 592 an audio file utilizing a shading system’s transceiver to an external AI server 575.
  • an external AI server 575 may receive one or more audio files, and a noise reduction or cancellation engine or module 584 may receive the communicated one or more audio files, may reduce and/or cancel ambient and/or background noise from the received audio files, and may generate one or more noise-reduced audio files.
  • a voice recognition engine or module 585 may convert 593 the received one or more noise-reduced audio files to a query request (e.g., traffic condition request, e-commerce order, and/or retrieve and stream digital music playlist).
  • an external AI server may communicate and/or transfer 597 a query request to a third party server (e.g., a traffic conditions server (e.g., SIGALERT or Maze) or an e-commerce server (e.g., a RITE-AID or SHADECRAFT server, or an Apple iTunes server)) to obtain third party goods and/or services.
  • a third party content server 580 (e.g., via a communication and query engine or module 581) may retrieve 598 services from a database 582.
  • a third party content server 580 may communicate services queried by the user (e.g., traffic conditions or digital music files to be streamed) 599 to an external AI server 575.
  • a third party content server 580 may order requested goods for a user and then retrieve and communicate 599 a transaction status to an external AI server 575.
  • a content communication module 587 may receive communicated services (e.g., traffic conditions or streamed digital music files) or transaction status updates (e.g., e-commerce receipts) and may communicate 701 the requested services (e.g., traffic conditions or streamed digital music files) or the transaction status updates to a shading system 570.
  • traffic services may be converted to an audio signal, and the audio signal may be reproduced utilizing an audio system 583.
  • digital music files may be communicated and/or streamed 702 directly to an audio system 583 because no conversion is necessary.
  • e-commerce receipts may be converted and communicated to a speaker 583 for reading aloud.
  • e-commerce receipts may also be transferred to a computing device in a shading system 570 for storage and later utilization.
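The routing choices described in the preceding items (convert traffic text to audio, stream music directly, read receipts aloud and store them) might be organized as in the following sketch. All function names and the `receipts.log` file are illustrative assumptions, not part of the embodiments.

```python
def speak(text):
    # Placeholder: convert text to an audio signal on audio system 583.
    print(f"[audio system 583] speaking: {text}")


def stream_to_audio_system(data):
    # Placeholder: pass streamed music straight through, no conversion.
    print(f"[audio system 583] streaming {len(data)} bytes")


def store_receipt(text):
    # Keep a copy on the shading system's computing device for later use.
    with open("receipts.log", "a") as f:
        f.write(text + "\n")


def handle_content(kind, payload):
    """Route third-party content the way the embodiments describe."""
    if kind == "traffic":
        speak(payload)                     # traffic text -> audio signal
    elif kind == "music":
        stream_to_audio_system(payload)    # streamed directly
    elif kind == "receipt":
        speak(payload)                     # read the receipt aloud
        store_receipt(payload)             # and store it for later


handle_content("traffic", "Route 66 is clear")
```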
  • computer-readable instructions in a memory module of a shading system may be executed by a processor and may comprise a voice recognition module or engine and/or a noise reduction and/or cancellation module or engine 541.
  • noise reduction and/or cancellation and/or voice recognition may be performed at an intelligent shading system 500 without utilizing a cloud-based server.
  • a mobile communications device (510 in Figure 1) may include computer-readable instructions executable by one or more processors to perform noise reduction or cancellation and/or voice recognition on received audio files captured from microphones.
  • a shading system 570 may receive 703 the communicated command, communicate and/or transfer 704 the communicated command to a controller/processor 571.
  • the controller/processor 571 may generate and/or communicate 596 a command, message, signal and/or instruction to cause an assembly, component, system or device 572 to perform an action requested in the original voice command.
  • a mobile computing device 510 may communicate with a shading system having artificial intelligence capabilities.
  • a user may communicate with a mobile computing or communications device 510 by a spoken command into a microphone.
  • a mobile computing or communications device 510 may communicate a digital or analog audio file to a processor 527 and/or AI API 540 in a shading device housing.
  • a mobile computing or communications device 510 may also convert the audio file into a textual file for easier processing by an external or integrated AI server or computing device 550.
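One possible way a mobile communications device 510 could convert an audio file into a textual file is sketched below using the third-party SpeechRecognition package for Python. The Google-backed recognizer is merely one example backend, not a method required by the embodiments, and the file name assumes the capture sketch shown earlier.

```python
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("voice_command.wav") as source:  # captured audio file
    audio = recognizer.record(source)

try:
    # One example backend; any on-device or cloud recognizer would do.
    text = recognizer.recognize_google(audio)
    print("textual command:", text)
except sr.UnknownValueError:
    print("audio could not be converted to text")
```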
  • Figures 5A and 5B describe a shading system having a shading element or shade, shading support and/or shading housing.
  • a shading housing such as the one described above may be attached to any shading system and may provide artificial intelligence functionality and services.
  • a shading system may be an autonomous and/or automated shading system having an integrated computing device, sensors and other components and/or assemblies, and may have artificial intelligence functionality and services provided utilizing an AI API stored in a memory of a shading housing.
  • Figure 6 illustrates an intelligent shading system comprising a shading housing wherein a shading housing comprises an AI or a noise reduction API according to embodiments.
  • a shading system 600 comprises an expansion module 660, a core module 630C and a shading housing 610.
  • an expansion module 660 may comprise one or more spoke support assemblies 663, one or more detachable arms/spokes 664, one or more solar panels and/or fabric 665, one or more LED lighting assemblies 666 and/or one or more speakers 667.
  • an expansion module 660 may be coupled and/or connected to a core assembly module 630C.
  • a coupling and/or connection may be made via a universal connection.
  • a core module assembly 630C may comprise an upper assembly 640, a sealed connection 641 and/or a lower assembly 642.
  • a core module assembly 630C may comprise one or more rechargeable batteries 6352, a motion control board 634, an expansion motor 6351 and/or an integrated computing device 636.
  • a core module assembly 630C may comprise one or more transceivers (e.g., a PAN transceiver 630, a WiFi transceiver 631 and/or a cellular transceiver 632).
  • a core module assembly 630C may be coupled and/or connected to a shading housing 610.
  • a universal connector may be a connector and/or coupler between a core module assembly 630C and a shading housing 610.
  • a shading housing 610 may comprise a shading system connector 613, one or more memory modules 615, one or more processors/controllers 625, one or more microphones 633, one or more transceivers (e.g., a PAN transceiver 630, a wireless local area network (e.g., WiFi) transceiver 631, and/or a cellular transceiver 632), and an artificial intelligence (“AI”) application programming interface (“API”) or noise reduction or noise cancellation API 620.
  • one or more microphones 633 receive a spoken command and capture/convert the command into a digital and/or analog audio file.
  • one or more processors/controllers 625 interact with and execute AI and/or noise reduction or cancellation API 620 instructions (stored in one or more memory modules 615) and communicate and/or transfer audio files to a third party AI server (e.g., an external AI server or computing device) for the external AI or third party server to perform noise reduction or cancellation, voice recognition or AI features.
  • an AI API 620 may communicate and/or transfer audio files via and/or utilizing a PAN transceiver 630, a local area network (e.g., WiFi) transceiver 631, and/or a cellular transceiver 632.
  • an AI API 620 may receive communications, data, measurements, commands, instructions and/or files from an external AI or third-party server or computing device (as described in Figures 5A, 5B or 6) and perform and/or execute actions in response to these communications.
  • an intelligent umbrella or shading system 600 may include computer-readable instructions stored in one or more memory modules 615 and executable on one or more processors 625 that perform noise reduction and/or cancellation on received audio files, perform voice recognition on received audio files or noise-reduced audio files, and communicate the voice-recognized commands and/or the noise-reduced audio files through an AI API 620 to a third-party and/or external AI server.
  • memories and processors are shown in a shading housing 610; in addition, computer-readable instructions, one or more memory modules and/or one or more processors may also be located in a core module 630C and/or an expansion shading assembly or module 660.
  • a shading system and/or umbrella may communicate via one or more transceivers. This provides a shading system with an ability to communicate with external computing devices, servers and/or mobile communications devices.
  • a shading system with a plurality of transceivers may communicate when one or more communication networks are down, experiencing technical difficulties, inoperable and/or not available.
  • a WiFi wireless router may be malfunctioning and a shading system with a plurality of transceivers may be able to communicate with external devices via a PAN transceiver 630 and/or a cellular transceiver 632.
  • a shading system with one or more transceivers may communicate with external computing devices via the operating transceivers. Since most shading systems do not have any communication transceivers, the shading systems described herein are an improvement over existing shading systems that have no and/or limited communication capabilities.
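The transceiver redundancy described above might be implemented along the lines of the following sketch, which tries each radio in a preferred order and sends over the first operational link. The `Transceiver` class is a hypothetical stand-in for real radio drivers.

```python
class Transceiver:
    """Hypothetical stand-in for a PAN, WiFi or cellular radio driver."""

    def __init__(self, name: str, up: bool) -> None:
        self.name, self.up = name, up

    def send(self, payload: bytes) -> None:
        print(f"sent {len(payload)} bytes via {self.name}")


def send_with_fallback(radios, payload: bytes) -> bool:
    for radio in radios:
        if radio.up:          # skip networks that are down or unavailable
            radio.send(payload)
            return True
    return False              # every communication network unavailable


radios = [Transceiver("PAN 630", up=False),      # e.g., out of range
          Transceiver("WiFi 631", up=False),     # e.g., router malfunction
          Transceiver("cellular 632", up=True)]
send_with_fallback(radios, b"noise-reduced audio file bytes")
```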
  • a base assembly or module may also comprise a base motor controller PCB, a base motor, a drive assembly and/or wheels.
  • a base assembly may move to track movement of the sun, wind conditions, and/or an individual’s commands.
  • a shading object movement control PCB may send commands, instructions, and/or signals to a base assembly identifying desired movements of a base assembly.
  • a shading computing device system (including a SMARTSHADE and/or SHADECRAFT application) and/or a desktop computer application may transmit commands, instructions, and/or signals to a base assembly identifying desired movements of the base assembly.
  • a base motor controller PCB may receive commands, instructions, and/or signals and may communicate commands and/or signals to a base motor.
  • a base motor may receive commands and/or signals, which may result in rotation of a motor shaft.
  • a motor shaft may be connected, coupled, or indirectly coupled (through gearing assemblies or other similar assemblies) to one or more drive assemblies.
  • a drive assembly may be one or more axles, where one or more axles may be connected to wheels.
  • a base assembly may receive commands, instructions and/or signal to rotate in a counterclockwise direction approximately 15 degrees.
  • a motor output shaft would rotate one or more drive assemblies to rotate a base assembly approximately 15 degrees.
  • a base assembly may comprise more than one motor and/or more than one drive assembly.
  • each of the motors may be controlled independently from one another, which may result in a wider range of movements and more complex movements.
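To make the rotation example above concrete, the following sketch converts a commanded rotation angle into motor steps. The steps-per-revolution value and the printed driver call are illustrative assumptions; the embodiments do not specify a motor type.

```python
# Convert a base-assembly rotation command into motor steps.
STEPS_PER_REV = 200          # common for stepper motors; an assumption


def rotate_base(degrees: float, clockwise: bool = True) -> int:
    steps = round(abs(degrees) / 360.0 * STEPS_PER_REV)
    direction = "CW" if clockwise else "CCW"
    print(f"base motor: {steps} steps {direction}")  # stand-in for driver
    return steps


rotate_base(15, clockwise=False)   # counterclockwise ~15 degrees -> 8 steps
```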
  • a base assembly 110 and/or first extension assembly 120 may be comprised of stainless steel.
  • a base assembly 110 and/or first extension assembly 120 may be comprised of a plastic and/or a composite material, or a combination of materials listed above.
  • a base assembly 110 and/or first extension assembly 120 may be comprised of and/or constructed from a biodegradable material.
  • a base assembly 110 and/or first extension assembly 120 may be tubular with a hollow inside except for shelves, ledges, and/or supporting assemblies.
  • a base assembly 110 and/or first extension assembly 120 may have a coated inside surface.
  • a base assembly 110 and/or first extension assembly 120 may have a circular circumference or a square circumference.
  • a core module assembly 630C may be comprised of stainless steel.
  • a core module assembly 630C may be comprised of a metal, plastic and/or a composite material, or a combination thereof.
  • a core module assembly 630C may be comprised of wood, steel, aluminum or fiberglass.
  • a shading object center support assembly may be a tubular structure, e.g., may have a circular or an oval circumference.
  • a core module assembly 630C may be a rectangular or triangular structure with a hollow interior.
  • a hollow interior of a core module assembly 630C may have a shelf or other structures for holding or attaching assemblies, PCBs, and/or electrical and/or mechanical components.
  • components, PCBs, and/or motors may be attached or connected to an interior wall of a shading object center assembly.
  • a plurality of spokes/arms/blades 664 and/or spoke/arm support assemblies 663 may be composed of materials such as plastics, plastic composites, fabric, metals, woods, composites, or any combination thereof.
  • spokes/arms/blades 664 and/or spoke/arm support assemblies 663 may be made of a flexible material.
  • spokes/arms/blades 664 and/or spokes/arm support assemblies 663 may be made of a stiffer material.
  • an intelligent umbrella comprising: a shading expansion assembly; a support assembly, coupled to the shading expansion assembly, to provide support for the shading expansion assembly; and a base assembly, coupled to the support assembly, to provide contact with a surface.
  • the intelligent umbrella also comprises one or more wireless communication transceivers, one or more microphones to capture audible commands, one or more memory modules and one or more processors.
  • computer-readable instructions stored in the one or more memory modules are executed by a processor to convert the captured audible commands into one or more audio files and perform noise reduction or noise cancellation on the one or more audio files to generate one or more noise-reduced audio files.
  • the computer-readable instructions stored in the one or more memory modules are further executed by the one or more processors to communicate the one or more noise-reduced audio files to an external computing device utilizing the one or more wireless communication transceivers.
  • the computer-readable instructions stored in the one or more memory modules are further executed by the one or more processors to perform voice recognition on the one or more noise-reduced audio files to generate audio command files.
  • the computer-readable instructions stored in the one or more memory modules are further executed by the one or more processors to generate commands, signals or messages and communicate the commands, signals or messages to assemblies of the intelligent umbrella to perform actions based at least in part on the captured audible commands.
  • performing noise reduction or noise cancellation on the one or more audio files to generate one or more noise-reduced audio files comprises reducing noise components of the one or more audio files captured by the one or more microphones by subtracting out components of previously stored noise audio files.
  • performing noise reduction or noise cancellation on the one or more audio files to generate noise-reduced audio files comprises sampling the one or more audio files captured by the one or more microphones to generate a plurality of command audio file samples and reducing the plurality of command audio file samples by subtracting associated noise file samples from the plurality of command audio file samples to generate noise-reduced audio samples.
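A minimal time-domain reading of the sampling-and-subtraction step just described is sketched below; it assumes the command and noise recordings are time-aligned and share a sample rate, which a real implementation would have to establish. The example arrays are invented for illustration.

```python
import numpy as np


def subtract_noise_samples(command: np.ndarray,
                           noise: np.ndarray) -> np.ndarray:
    """Subtract stored noise samples from command samples, index by index."""
    n = min(len(command), len(noise))   # trim to the shorter recording
    return command[:n].astype(float) - noise[:n].astype(float)


command = np.array([100, 120, 90, 110], dtype=np.int16)
stored_noise = np.array([10, 15, 12, 9], dtype=np.int16)
print(subtract_noise_samples(command, stored_noise))  # [ 90. 105.  78. 101.]
```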
  • the computer-readable instructions stored in the one or more memory modules and executed by the one or more processors further comprise capturing a current time or a current day of the week, retrieving a noise file associated with a time or day that most closely matches the captured current time or current day, and performing noise reduction on the captured one or more audio files utilizing the retrieved noise file to generate the one or more noise-reduced audio files.
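The closest-time retrieval just described might look like the following sketch; the capture hours and file names are invented for illustration, and midnight wrap-around is ignored for brevity.

```python
from datetime import datetime

# (hour captured, stored noise file) -- values invented for illustration.
NOISE_FILES = [(7, "morning_birds.wav"),
               (13, "afternoon_traffic.wav"),
               (20, "evening_crickets.wav")]


def closest_noise_file(now: datetime) -> str:
    # The nearest capture hour wins.
    return min(NOISE_FILES, key=lambda rec: abs(rec[0] - now.hour))[1]


print(closest_noise_file(datetime.now()))
```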
  • the computer-readable instructions stored in the one or more memory modules and executed by the one or more processors further comprise parsing the one or more audio files into one or more noise command files and one or more umbrella command files, performing voice recognition on the one or more noise command files to determine names of corresponding noise files, retrieving the corresponding noise files, and performing noise reduction on the captured one or more audio files utilizing the retrieved noise files to generate the one or more noise-reduced audio files.
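One highly simplified reading of the parsing step is sketched below: it assumes the recognized utterance names a noise profile before the umbrella command, using an invented "with ... noise" convention that is purely illustrative.

```python
import re


def parse_utterance(text: str):
    """Split an utterance into a noise-profile name and an umbrella command.

    Assumes an invented "with <name> noise <command>" convention.
    """
    match = re.match(r"with (\w+) noise (.+)", text.lower())
    if match:
        return match.group(1), match.group(2)
    return None, text.lower()


print(parse_utterance("With traffic noise open shading system"))
# -> ('traffic', 'open shading system')
```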
  • a mobile communications device includes one or more microphones, one or more processors, one or more memory modules, one or more wireless transceivers to communicate with an intelligent umbrella, and computer-readable instructions stored in the one or more memory modules and executable by the one or more processors to capture, via the one or more microphones, first noise audio files and second noise audio files of an environment in a vicinity of the intelligent umbrella.
  • computer-readable instructions stored in the one or more memory modules and executable by the one or more processors store the first noise audio files and the second noise audio files in the one or more memory modules, as baseline noise audio files for the environment in the vicinity of the intelligent umbrella.
  • the computer-readable instructions stored in the one or more memory modules and executable by the one or more processors capture audible sounds via the one or more microphones and generate one or more audio command files; and perform noise reduction processing on the one or more audio command files utilizing one of the first noise audio files or the second noise audio files to generate one or more noise-reduced audio command files.
  • the computer-readable instructions stored in the one or more memory modules and executable by the one or more processors communicate the one or more noise- reduced audio command files, via the one or more wireless communications transceivers, to the intelligent umbrella for voice recognition processing.
  • the computer-readable instructions stored in the one or more memory modules and executable by the one or more processors perform voice recognition on the one or more noise-reduced audio files to generate one or more command files.
  • the computer-readable instructions stored in the one or more memory modules and executable by the one or more processors communicate the one or more command files to the intelligent umbrella to cause assemblies to act based at least in part on the audible commands.
  • the computer-readable instructions stored in the one or more memory modules and executable by the one or more processors capture a first time identifier for the first noise audio files and a second time identifier for the second noise audio files; and store the first time identifier in a database record with the first noise audio files and the second time identifier in a database record with the second noise audio files.
  • the computer-readable instructions stored in the one or more memory modules and executable by the one or more processors capture a current time identifier for a time subsequent to the first time identifier and the second time identifier; capture audible sounds via the one or more microphones, and generate one or more audio command files; retrieve one or more noise audio files from at least the first noise audio files or the second noise audio files, the one or more retrieved noise audio files having a time identifier closest to the current time identifier; and perform noise reduction processing on the one or more audio command files utilizing the retrieved noise audio files to generate one or more noise-reduced audio command files.
  • the computer-readable instructions stored in the one or more memory modules and executable by the one or more processors capture a first type identifier for the first noise audio files and a second type identifier for the second noise audio files; and store the first type identifier in a database record with the first noise audio files and the second type identifier in a database record with the second noise audio files.
  • the computer-readable instructions stored in the one or more memory modules and executable by the one or more processors capture a current noise type identifier for a time subsequent to the first type identifier and the second type identifier; capture audible sounds via the one or more microphones, and generate one or more audio command files; retrieve one or more noise audio files from at least the first noise audio files or the second noise audio files, the retrieved one or more noise audio files having a noise type identifier closest to the current noise type identifier; and perform noise reduction processing on the one or more audio command files utilizing the retrieved one or more noise audio files to generate one or more noise-reduced audio command files.
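The database records of noise audio files with time and type identifiers described in the preceding items might be organized as in the following sketch; the schema, file names and best-match ordering rule are assumptions for illustration only.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE noise_files
              (path TEXT, time_id INTEGER, type_id TEXT)""")
db.executemany(
    "INSERT INTO noise_files VALUES (?, ?, ?)",
    [("noise_morning.wav", 8, "birds"),
     ("noise_noon.wav", 12, "traffic"),
     ("noise_evening.wav", 19, "crickets")])


def retrieve(current_hour: int, current_type: str) -> str:
    """Prefer a matching type identifier, then the closest time identifier."""
    row = db.execute(
        """SELECT path FROM noise_files
           ORDER BY (type_id = ?) DESC, ABS(time_id - ?) ASC
           LIMIT 1""",
        (current_type, current_hour)).fetchone()
    return row[0]


print(retrieve(11, "traffic"))  # -> noise_noon.wav
```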
  • the computer-readable instructions stored in the one or more memory modules and executable by the one or more processors sample the first noise audio files to generate first noise audio samples; sample the second noise audio files to generate second noise audio samples; and store the first noise audio samples and the second noise audio samples.
  • the first noise audio samples comprise a plurality of sample amplitudes and associated times or wavelengths, and the second noise audio samples comprise a plurality of sample amplitudes and associated times or wavelengths.
  • Still another embodiment involves a computer-readable medium comprising processor-executable instructions configured to implement one or more of the techniques presented herein.
  • a computer-readable medium (e.g., a CD-R, DVD-R, or a platter of a hard disk drive) may have computer-readable data encoded thereon; this computer-readable data in turn comprises a set of computer instructions configured to operate according to one or more of the principles set forth herein.
  • the processor-executable instructions may be configured to perform a method, such as those described herein.
  • Many such computer-readable media may be devised by those of ordinary skill in the art that are configured to operate in accordance with the techniques presented herein.
  • a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer.
  • an application running on a controller and the controller can be a component.
  • One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
  • the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter.
  • article of manufacture as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media.
  • Example computing devices include, but are not limited to, personal computers, server computers, hand-held or laptop devices, mobile devices (such as mobile phones, Personal Digital Assistants (PDAs), media players, and the like), multiprocessor systems, consumer electronics, mini computers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
  • Computer readable instructions may be distributed via computer readable media (discussed below).
  • Computer readable instructions may be implemented as program modules, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types.
  • first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, but these elements, components, regions, layers and/or sections should not be limited by these terms. These terms may only be used to distinguish one element, component, region, layer or section from another region, layer or section. Terms such as “first,” “second,” and other numerical terms when used herein do not imply a sequence or order unless clearly indicated by the context. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the example embodiments.
  • module may refer to, be part of, or include: an Application Specific Integrated Circuit (ASIC); an electronic circuit; a combinational logic circuit; a field programmable gate array (FPGA); a processor or a distributed network of processors (shared, dedicated, or grouped) and storage in networked clusters or datacenters that executes code or a process; other suitable components that provide the described functionality; or a combination of some or all of the above, such as in a system-on-chip.
  • the term module may also include memory (shared, dedicated, or grouped) that stores code executed by the one or more processors.
  • code may include software, firmware, byte-code and/or microcode, and may refer to programs, routines, functions, classes, and/or objects.
  • shared means that some or all code from multiple modules may be executed using a single (shared) processor. In addition, some or all code from multiple modules may be stored by a single (shared) memory.
  • group means that some or all code from a single module may be executed using a group of processors. In addition, some or all code from a single module may be stored using a group of memories.
  • the techniques described herein may be implemented by one or more computer programs executed by one or more processors.
  • the computer programs include processor-executable instructions that are stored on a non-transitory tangible computer readable medium.
  • the computer programs may also include stored data.
  • Non-limiting examples of the non-transitory tangible computer readable medium are nonvolatile memory, magnetic storage, and optical storage.
  • Certain aspects of the described techniques include process steps and instructions described herein in the form of an algorithm. It should be noted that the described process steps and instructions could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by real time network operating systems.
  • the present disclosure also relates to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored on a computer readable medium that can be accessed by the computer.
  • a computer program may be stored in a tangible computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
  • the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.

Landscapes

  • Telephone Function (AREA)

Abstract

An intelligent umbrella comprises a shading expansion assembly; a support assembly, coupled to the shading expansion assembly, to provide support for the shading expansion assembly; and a base assembly, coupled to the support assembly, to provide contact with a surface. The intelligent umbrella may further comprise one or more wireless communication transceivers, one or more microphones to capture audible commands, one or more memory modules and one or more processors, and computer-readable instructions stored in the one or more memory modules are executed by a processor to convert the captured audible commands into one or more audio files and to perform noise reduction or noise cancellation on the one or more audio files in order to generate one or more noise-reduced audio files.
PCT/US2018/047010 2017-08-19 2018-08-19 Parapluies intelligents et systèmes robotiques WO2019040343A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/681,377 2017-08-19
US15/681,377 US20180190257A1 (en) 2016-12-29 2017-08-19 Intelligent Umbrellas and/or Robotic Shading Systems Including Noise Cancellation or Reduction

Publications (1)

Publication Number Publication Date
WO2019040343A1 true WO2019040343A1 (fr) 2019-02-28

Family

ID=65439231

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/047010 WO2019040343A1 (fr) 2017-08-19 2018-08-19 Parapluies intelligents et systèmes robotiques

Country Status (1)

Country Link
WO (1) WO2019040343A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114211508A (zh) * 2021-12-21 2022-03-22 重庆特斯联智慧科技股份有限公司 便民化服务多功能机器人及其站点

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130073283A1 (en) * 2011-09-15 2013-03-21 JVC KENWOOD Corporation a corporation of Japan Noise reduction apparatus, audio input apparatus, wireless communication apparatus, and noise reduction method
US20160326765A1 (en) * 2015-05-07 2016-11-10 Scott Barbret Systems and methods for providing a portable weather, hydration, and entertainment shelter
US20160338457A1 (en) * 2015-05-22 2016-11-24 Shadecraft Llc Intelligent Shading Objects

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130073283A1 (en) * 2011-09-15 2013-03-21 JVC KENWOOD Corporation a corporation of Japan Noise reduction apparatus, audio input apparatus, wireless communication apparatus, and noise reduction method
US20160326765A1 (en) * 2015-05-07 2016-11-10 Scott Barbret Systems and methods for providing a portable weather, hydration, and entertainment shelter
US20160338457A1 (en) * 2015-05-22 2016-11-24 Shadecraft Llc Intelligent Shading Objects

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114211508A (zh) * 2021-12-21 2022-03-22 重庆特斯联智慧科技股份有限公司 便民化服务多功能机器人及其站点
CN114211508B (zh) * 2021-12-21 2024-05-07 重庆特斯联智慧科技股份有限公司 便民化服务多功能机器人及其站点

Similar Documents

Publication Publication Date Title
US10118671B2 (en) Marine vessel with intelligent shading system
US10813422B2 (en) Intelligent shading objects with integrated computing device
US10819916B2 (en) Umbrella including integrated camera
US10554436B2 (en) Intelligent umbrella and/or robotic shading system with ultra-low energy transceivers
US20200113297A1 (en) Automated umbrella
US10349493B2 (en) Artificial intelligence (AI) computing device with one or more lighting elements
US10455395B2 (en) Shading object, intelligent umbrella and intelligent shading charging security system and method of operation
US20180190257A1 (en) Intelligent Umbrellas and/or Robotic Shading Systems Including Noise Cancellation or Reduction
US20190078347A1 (en) Intelligent umbrella or robotic shading system including rodent repelling device and/or insect disabling device
US10488834B2 (en) Intelligent umbrella or robotic shading system having telephonic communication capabilities
US20200063461A1 (en) Automatic operation of automation attachment and setting of device parameters
US20180291579A1 (en) Snow/Ice Melting Drone Device
US20190294131A1 (en) Computer-Readable Instructions Executable by a Processor to Operate an Umbrella
US10912357B2 (en) Remote control of shading object and/or intelligent umbrella
US20190292805A1 (en) Intelligent umbrella and integrated audio subsystem and rack gear assembly
US10519688B2 (en) Apparatus and method for identifying operational status of umbrella, parasol or shading system utilizing lighting elements
CN112735403B (zh) 一种基于智能音响的智能家居控制系统
CN109279016A (zh) 无人机数字语音播报系统
WO2019040343A1 (fr) Parapluies intelligents et systèmes robotiques
US11611314B2 (en) Solar panel performance modeling and monitoring
WO2018195262A1 (fr) Système d'ombrage intelligent avec ensemble de base mobile
WO2019033069A1 (fr) Commande de plusieurs parasols intelligents et/ou systèmes d'ombrage robotiques

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18848851

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18848851

Country of ref document: EP

Kind code of ref document: A1