U.S. patent application number 15/823,404 was filed with the patent office on 2017-11-27 and published on 2018-11-15 as publication number 20180329375 for a computing device or artificial intelligence (AI) device including a shading element or shading system. The application is currently assigned to Shadecraft, Inc. The applicant listed for this patent is Shadecraft, Inc. The invention is credited to Armen Sevada Gharabegian.
United States Patent Application 20180329375
Kind Code: A1
Application Number: 15/823404
Family ID: 64096088
Publication Date: November 15, 2018
Inventor: Gharabegian, Armen Sevada
Title: Computing Device or Artificial Intelligence (AI) Device Including Shading Element or Shading System
Abstract
An apparatus provides protection to a computing device housing. The computing device housing comprises one or more microphones to capture audio sounds; one or more processors; one or more memory modules; one or more wireless transceivers; and computer-readable instructions stored in the one or more memory modules. The apparatus includes a support assembly connected to a top surface of the computing device housing and a shading assembly connected to an end of the support assembly, the shading assembly to provide shade from environmental conditions to the computing device housing. The computer-readable instructions are executed by the one or more processors to convert the captured audio sounds from the one or more microphones into one or more audio files and to communicate the one or more audio files, via the one or more wireless transceivers, to an external computing device for voice recognition and conversion into command files.
Inventors: Gharabegian, Armen Sevada (Glendale, CA)
Applicant: Shadecraft, Inc., Pasadena, CA, US
Assignee: Shadecraft, Inc.
Family ID: 64096088
Appl. No.: 15/823404
Filed: November 27, 2017
Related U.S. Patent Documents

Application Number: 62/505,910
Filing Date: May 13, 2017
Current U.S. Class: 1/1
Current CPC Class: E04F 10/10 20130101; G05B 15/02 20130101; G06F 3/167 20130101; E04H 15/58 20130101; E04F 10/06 20130101; H04M 1/18 20130101; H04M 1/21 20130101; H02S 99/00 20130101; E04F 10/04 20130101; E04H 15/28 20130101; G10L 15/22 20130101; G06N 5/04 20130101; H02S 40/38 20141201; H04W 84/12 20130101; G10L 2015/223 20130101; E04F 10/02 20130101; E04F 10/00 20130101; G10L 15/28 20130101; H02S 20/30 20141201; E04H 15/02 20130101; H02S 20/32 20141201
International Class: G05B 15/02 20060101 G05B015/02; E04H 15/02 20060101 E04H015/02; G10L 15/22 20060101 G10L015/22
Claims
1. An apparatus to provide shade, comprising: a computing device
housing, the computing device housing comprising: one or more
microphones, the one or more microphones to capture audio sounds;
one or more processors; one or more memory modules; and
computer-readable instructions stored in the one or more memory
modules; a support assembly connected to a top surface of the
computing device housing; and a shading assembly connected to an
end of the support assembly, the shading assembly to provide shade
from environmental conditions to the computing device housing,
wherein the computer-readable instructions are executable by the
one or more processors to convert the captured audio sounds from
the one or more microphones to one or more audio files.
2. The apparatus of claim 1, further comprising one or more
wireless transceivers, wherein the computer-readable instructions are
executed by the one or more processors to communicate the one or
more audio files, via the one or more wireless transceivers, to an
external computing device for voice recognition and conversion into
command files.
3. The apparatus of claim 2, wherein the computer-readable
instructions are executed by the one or more processors to receive
the command files via the one or more wireless transceivers from
the external computing device to control operation of one or more
assemblies or components of the computing device housing.
4. The apparatus of claim 3, wherein the one or more assemblies or
the components are one or more environmental sensors, and wherein
the computer-readable instructions are executed by the one or more
processors to instruct the one or more environmental sensors to
capture environmental measurements and communicate the captured
environmental measurements.
5. The apparatus of claim 4, wherein the computer-readable
instructions are executed by the one or more processors to
analyze the captured environmental measurements, to generate commands
to control operation of another assembly or component in the
computing device housing, and to communicate the generated commands
to the other assembly or component.
6. The apparatus of claim 3, wherein the one or more assemblies or
the components are one or more directional sensors, wherein the
computer-readable instructions are executed by the one or more
processors to instruct the one or more directional sensors to
capture directional measurements and communicate the captured
directional measurements.
7. The apparatus of claim 3, wherein the one or more assemblies or
components are one or more imaging devices, and wherein the
computer-readable instructions are executed by the one or more
processors to activate the one or more imaging devices to capture
video and/or audio of an environment surrounding the computing
device and to communicate the captured video and/or audio to a
portable computing device, via one or more of the wireless
transceivers.
8. The apparatus of claim 3, further comprising one or more cameras
and one or more motion sensors, the one or more motion sensors to
detect movement in an area around the apparatus, to generate and
communicate signals to the one or more processors, wherein the one
or more computer-readable instructions are executed by the one or
more processors to activate the one or more cameras based, at least
in part, on the detected movement.
9. The apparatus of claim 1, further comprising a rotation
assembly, the rotation assembly connected to the computing device
housing and to the support assembly, the rotation assembly to rotate
the support assembly and the shading assembly with respect to the
computing device housing.
10. The apparatus of claim 1, further comprising a first hinging
assembly, a second hinging assembly, and an additional support
assembly, the additional support assembly connected to the shading
assembly and connected to the computing device housing via the
second hinging assembly, the support assembly connected to the
computing device housing via the first hinging assembly.
11. The apparatus of claim 1, further comprising a rotation
assembly and a hinging assembly, the support assembly further
comprising an upper shading section and a lower shading section,
the rotation assembly connecting the computing device housing to the
lower shading section and the hinging assembly connecting the lower
shading section to the upper shading section, the rotation assembly
to cause the support assembly and the shading assembly to rotate in
an azimuth direction with respect to the computing device housing, the
hinging assembly to cause the upper shading section to rotate with
respect to the lower shading section.
12. The apparatus of claim 1, wherein the computer-readable
instructions are executed by the one or more processors to receive
the one or more audio files, and perform voice recognition on the
one or more audio files to generate command files.
13. The apparatus of claim 12, wherein the computer-readable
instructions are further executed by the one or more processors to
communicate the command files to the one or more processors to
control operation of one or more assemblies or components of the
computing device housing.
14. The apparatus of claim 1, the shading assembly further
comprising one or more photovoltaic cells to convert sunlight into
electrical power.
15. The apparatus of claim 1, the computing device housing further
comprising one or more photovoltaic cells to convert sunlight into
electrical power.
16. The apparatus of claim 14, further comprising a charging
assembly and a rechargeable power source, the charging assembly to
receive the electrical power and to transfer the electrical power
to the rechargeable power source, the rechargeable power source to
provide voltage or current, or both, to components and/or assemblies
in the computing device housing.
17. An apparatus to provide shade, comprising: a base assembly to
contact a surface; a computing device housing, the computing device
housing comprising: one or more microphones, the one or more
microphones to capture audio sounds; one or more processors; one or
more memory modules; and computer-readable instructions stored in
the one or more memory modules; one or more support assemblies
connected to a top surface of the computing device housing and a
shading assembly connected to the one or more support assemblies,
the shading assembly to provide protection from environmental
conditions to the computing device housing; and a rotation assembly
connected to the base assembly and connected to the computing
device housing, the rotation assembly to cause rotation of the
computing device housing with respect to the base assembly; wherein
the computer-readable instructions are executable by the one or more
processors to convert the captured audio sounds from the one or
more microphones to one or more audio files.
18. The apparatus of claim 17, further comprising one or more
wireless transceivers, wherein the computer-readable instructions
are executed by the one or more processors to communicate the one
or more audio files, via the one or more wireless transceivers, to
an external computing device for voice recognition and conversion
into command files, to receive the converted command files from the
external computing device, and wherein the converted commands files
comprise instructions to cause the rotation assembly to move or
rotate the computing device housing with respect to the base
assembly.
19. An apparatus to provide shade, comprising: a computing device
housing, the computing device housing comprising: one or more
microphones, the one or more microphones to capture audio sounds;
one or more processors; one or more memory modules; and
computer-readable instructions stored in the one or more memory
modules; a support assembly connected to a top surface of the
computing device housing; one or more wireless transceivers to
communicate with a mobile computing device; and a shading assembly
connected to an end of the support assembly, the shading assembly
to provide shade from environmental conditions to the computing
device housing, wherein the computer-readable instructions are
executable by the one or more processors to receive audio files
captured by one or more microphones of the mobile computing
device.
20. The apparatus of claim 19, wherein the computer-readable
instructions are executed by the one or more processors to
communicate the one or more audio files, via the one or more
wireless transceivers, to an external computing device for voice
recognition and conversion into command files.
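The flow recited in claims 1-3 — microphone audio converted to an audio file, communicated to an external computing device for voice recognition, and a command file returned to control an assembly — can be sketched in software. The following is a minimal illustration only; the function names, command format, and recognized phrases are assumptions for the sketch and do not appear in the application.

```python
# Sketch of the claim 1-3 flow: microphone audio -> audio file ->
# external voice recognition -> command file -> assembly control.
# All names and the command format here are illustrative assumptions.

def capture_audio(samples):
    """Convert captured audio samples into an 'audio file' (raw bytes)."""
    return bytes(samples)

def external_voice_recognition(audio_file):
    """Stand-in for the external computing device: performs 'voice
    recognition' on the audio file and converts it into a command
    file (modeled here as a simple dict)."""
    phrase = audio_file.decode("ascii", errors="ignore")
    commands = {
        "open shade": {"assembly": "shading", "action": "deploy"},
        "rotate": {"assembly": "rotation", "action": "rotate"},
    }
    return commands.get(phrase, {"assembly": None, "action": "ignore"})

def dispatch(command):
    """Route a received command file to the named assembly/component
    (claim 3 behavior); unrecognized commands are ignored."""
    if command["assembly"] is None:
        return "no-op"
    return f'{command["assembly"]}:{command["action"]}'

audio = capture_audio(b"open shade")
command = external_voice_recognition(audio)
print(dispatch(command))  # shading:deploy
```

In the claimed apparatus the recognition step runs on an external computing device reached via the wireless transceivers; the sketch collapses that hop into a local function call.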
Description
RELATED APPLICATIONS
[0001] This application claims priority to provisional application
Ser. No. 62/505,910, filed May 13, 2017, entitled "Artificial
Intelligence (AI) Computing Device with Shading System," the
disclosure of which is incorporated by reference.
BACKGROUND
1. Field
[0002] The subject matter disclosed herein relates to an artificial
intelligence device or a computing device that comprises a housing
and a shading system.
2. Information/Background of the Invention
[0003] Conventional artificial intelligence computing devices have
limitations based on being utilized indoors. Indoor AI computing
devices cannot operate in outdoor environments because they are not
protected from environmental conditions such as wind, rain, sun
and/or air quality factors (e.g., smoke, carbon monoxide, etc.).
Accordingly, a need exists for AI computing devices that may be
utilized in outdoor environments.
BRIEF DESCRIPTION OF DRAWINGS
[0004] Non-limiting and non-exhaustive aspects are described with
reference to the following figures, wherein like reference numerals
refer to like parts throughout the various figures unless otherwise
specified.
[0005] FIG. 1 illustrates an apparatus including artificial
intelligence device or computing device with shading system
according to embodiments;
[0006] FIG. 1A illustrates a block diagram of components utilized
to provide power in an apparatus including an AI device or
computing device and a shading system;
[0007] FIG. 2 illustrates an apparatus including an AI device or
computing device and shading system with an adjustable shading
support according to embodiments;
[0008] FIG. 3 illustrates an apparatus including AI device or
computing device and shading system with a hinging support assembly
according to embodiments;
[0009] FIG. 4 illustrates a microphone and/or LED array in an AI
device housing or computing device housing according to
embodiments;
[0010] FIG. 5 illustrates a block and dataflow diagram of
communications between an AI device or computing device and shading
system according to embodiments;
[0011] FIG. 6 illustrates a block diagram of components and
assemblies for rotating an AI device housing or computing device
housing body about a base assembly;
[0012] FIG. 7A illustrates an apparatus including an AI device or
computing device with shading system with a movable base assembly
according to embodiments;
[0013] FIG. 7B is a flowchart illustrating base assembly movement
according to voice commands according to embodiments;
[0014] FIG. 7C illustrates movement of a base assembly according to
sensor measurements according to embodiments;
[0015] FIG. 7D illustrates movement of a base assembly utilizing a
camera and/or pattern recognition and/or image processing according
to embodiments; and
[0016] FIG. 8 illustrates a computing device and/or electronic
device according to embodiments.
DETAILED DESCRIPTION
[0017] In the following detailed description, numerous specific
details are set forth to provide a thorough understanding of
claimed subject matter. For purposes of explanation, specific
numbers, systems and/or configurations are set forth, for example.
However, it should be apparent to one skilled in the relevant art
having benefit of this disclosure that claimed subject matter may
be practiced without these specific details.
[0018] References throughout this specification to one
implementation, an implementation, one embodiment, embodiments, an
embodiment and/or the like means that a particular feature,
structure, and/or characteristic described in connection with a
particular implementation and/or embodiment is included in at least
one implementation and/or embodiment of claimed subject matter.
Thus, appearances of such phrases, for example, in various places
throughout this specification are not necessarily intended to refer
to the same implementation or to any one particular implementation
described. Furthermore, it is to be understood that particular
features, structures, and/or characteristics described are capable
of being combined in various ways in one or more implementations
and, therefore, are within intended claim scope, for example.
[0019] It is typical to employ distributed computing approaches,
where computing is allocated among computing devices, including one
or more clients and/or one or more servers, via a computing and/or
communications network, for example. A network may comprise two or
more network and/or computing devices and/or may couple network
and/or computing devices so that communications may be exchanged,
such as between a server and a client device and/or other types of
devices, including between wireless devices coupled via a wireless
network, for example. In this context, the term
network device refers to any device capable of communicating via
and/or as part of a network and may comprise a computing
device.
[0020] Computing devices, mobile computing devices, and/or network
devices capable of operating as a server, or otherwise, may
include, as examples, rack-mounted servers, desktop computers,
laptop computers, set top boxes, tablets, netbooks, smart phones,
wearable devices, integrated devices combining two or more features
of the foregoing devices, single-board computers, the like or any
combination thereof. It is noted that the terms, server, servers,
server device, server computing device, application server, cloud
servers, server network devices, and/or similar terms are used
interchangeably. Similarly, the terms client, client device, client
computing device, clients, and/or similar terms are also used
interchangeably. These terms may be used in the singular, such as
by referring to a "client device" or a "server device," and may be
intended to encompass one or more client devices and/or one or more
server devices, as appropriate. References to a "database" or
"databases" are understood to mean, one or more databases, database
servers, application data servers, database cloud servers, proxy
servers, and/or portions thereof, as appropriate.
[0021] Operations and/or processing, such as in association with
networks, such as computing and/or communications networks, for
example, may involve physical manipulations of physical quantities
(electrical, magnetic and/or optical signals). These signals may be
utilized as bits, data, values, elements, symbols, characters,
terms, numbers, numerals and/or the like.
[0022] Likewise, in this context, the terms "coupled", "connected,"
and/or similar terms are used generically. It should be understood
that these terms are not intended as synonyms. Rather, "connected"
is used generically to indicate that two or more components, for
example, are in direct physical, including direct electrical,
contact; while, "coupled" is used generically to mean that two or
more components are potentially in direct physical, including
electrical, contact; however, "coupled" is also used generically to
also mean that two or more components are not necessarily in direct
contact, but nonetheless are able to co-operate and/or interact
(also indirectly coupled). The term "coupled" is also understood
generically to mean indirectly connected, for example, in an
appropriate context. If signals, messages and/or commands are
transmitted from one component (or assembly) to another component
(or assembly), it is understood that messages, signals,
instructions, and/or commands may be transmitted directly to a
component, or may pass through a number of other components on a
way to a destination component. For example, a signal transmitted
from a motor controller to a motor (or other driving assembly) may
pass through glue logic, an amplifier, an analog-to-digital
converter, a digital-to-analog converter, another controller and/or
processor, and/or an interface. Similarly, a signal communicated
through a misting system may pass through an air conditioning
and/or a cooling module, and a signal communicated from any one or a
number of sensors to a controller and/or processor may pass through
a conditioning module, an analog-to-digital converter, and/or a
comparison module, and/or a number of other electrical assemblies
and/or components.
[0023] The terms, "and", "or", "and/or" and/or similar terms, as
used herein, include a variety of meanings that also are expected
to depend at least in part upon the particular context in which
such terms are used. Typically, "or" if used to associate a list,
such as A, B or C, is intended to mean A, B, and C, here used in
the inclusive sense, as well as A, B or C, here used in the
exclusive sense. In addition, the term "one or more" and/or similar
terms is used to describe any feature, structure, and/or
characteristic in the singular and/or is also used to describe a
plurality and/or some other combination of features, structures
and/or characteristics.
[0024] Likewise, the term "based on," "based, at least in part on,"
and/or similar terms (e.g., based at least in part on) are
understood as not necessarily intending to convey an exclusive set
of factors, but to allow for existence of additional factors not
necessarily expressly described. Claimed subject matter is not
limited to these one or more illustrative examples; however, again,
particular context of description and/or usage provides helpful
guidance regarding inferences to be drawn.
[0025] A network may also include, for example, past, present
and/or future large storage devices, such as network storage, cloud
storage, storage networks, cloud server farms, and/or other forms
of computing and/or device readable media, for example. A network
may include a portion of the Internet, one or more local area
networks (LANs), one or more wide area networks (WANs), wire-line
type connections, one or more personal area networks (PANs), one or
more wireless wide area networks (WWANs), wireless type
connections, one or more mesh
networks, one or more cellular communication networks, other
connections, or any combination thereof. Thus, a network may be
worldwide in scope and/or extent.
[0026] The Internet and/or a global communications network may
refer to a decentralized global network of interoperable networks
that comply with the Internet Protocol (IP). It is noted that there
are several versions of the Internet Protocol. Here, the term
Internet Protocol, IP, and/or similar terms, is intended to refer
to any version, now known and/or later developed of the Internet
Protocol. The Internet may include local area networks (LANs), wide
area networks (WANs), wireless networks, and/or long haul public
networks that, for example, may allow signal packets and/or frames
to be communicated between LANs. The term World Wide Web (WWW or
Web) and/or similar terms may also be used, although it refers to a
part of the Internet that complies with the Hypertext Transfer
Protocol (HTTP) or XML. It is likewise noted that in various places
there is substitution of the term Internet with the term World Wide
Web ("Web").
[0027] Claimed subject matter is not in particular limited in scope
to the Internet and/or to the Web; nonetheless, the Internet and/or
the Web may without limitation provide a useful example of an
embodiment at least for purposes of illustration. As
indicated, the Internet and/or the Web may comprise a worldwide
system of interoperable networks, including interoperable devices
within those networks. A HyperText Markup Language ("HTML"),
Cascading Style Sheets ("CSS") or Extensible Markup Language
("XML"), for example, may be utilized to specify content and/or to
specify a format for hypermedia type content, such as in the form
of a file and/or an "electronic document," such as a Web page, for
example. HTML and/or XML are merely example languages and are not
intended to be limited to examples provided as illustrations, of
course.
[0028] One or more parameters may be descriptive of a collection of
physical signals and/or physical states. For example, one or more
parameters, such as referring to an electronic document comprising
an image, may include parameters, such as 1) time of day at which
an image was captured, latitude and longitude of an image capture
device, such as a camera; 2) time and day of when a sensor reading
(e.g., humidity, temperature, air quality, UV radiation) was
received; and/or 3) operating conditions of one or more motors or
other components or assemblies in a modular umbrella shading
system. Claimed subject matter is intended to embrace meaningful,
descriptive parameters in any format.
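The kinds of descriptive parameters enumerated above — capture time, latitude/longitude of the capture device, and timestamped sensor readings — can be modeled as a simple record. This is only an illustrative sketch; the field names and the example values are assumptions, not taken from the application.

```python
# Illustrative record for the descriptive parameters discussed above
# (sensor kind, value, capture time, capture location). Field names
# and values are assumptions for the sketch.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class SensorReading:
    kind: str           # e.g., "humidity", "temperature", "uv"
    value: float
    captured_at: datetime
    latitude: float
    longitude: float

reading = SensorReading(
    kind="humidity",
    value=41.5,
    captured_at=datetime(2017, 5, 13, 12, 0, tzinfo=timezone.utc),
    latitude=34.1478,      # illustrative coordinates
    longitude=-118.1445,
)
print(asdict(reading)["kind"])  # humidity
```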
[0029] Some portions of the detailed description which follow are
presented in terms of algorithms or symbolic representations of
operations on binary digital signals stored within a memory of a
specific apparatus or special purpose computing device or platform.
In the context of this particular specification, the term specific
apparatus or the like includes a general purpose computer once it
is programmed to perform particular functions pursuant to
instructions from program software. In embodiments, a computing
device may be installed within or as part of an artificial
intelligence system having a shading element or structure.
Algorithmic descriptions or symbolic representations are examples
of techniques used by those of ordinary skill in the signal
processing or related arts to convey the substance of their work to
others skilled in the art. An algorithm is here, and generally,
considered to be a self-consistent sequence of operations or
similar signal processing leading to a desired result. In this
context, operations or processing involve physical manipulation of
physical quantities.
[0030] Utilization of terms in the specification such as
"processing," "computing," "calculating," "determining" or the like
may refer to actions or processes of a specific apparatus, such as
a special purpose computer or a similar special purpose electronic
computing device (e.g., such as an artificial intelligence
computing device). In the context of this specification, therefore,
a special purpose computer or a similar special purpose electronic
computing device (e.g., an AI computing device) is capable of
manipulating or transforming signals (electronic and/or magnetic)
in memories (or components thereof), other storage devices,
communication devices, transmission or transceiver devices, sound
reproduction devices, and/or display devices.
[0031] In an embodiment, a controller and/or a processor typically
performs a series of instructions resulting in data manipulation.
In an embodiment, a microcontroller or microprocessor may be a
compact microcomputer designed to govern the operation of embedded
systems in electronic devices, e.g., an AI computing device with a
shading element and/or shading structure, an AI device with a
shading element, and various other electronic and mechanical
devices coupled thereto or installed thereon. Microcontrollers may
include processors, microprocessors, and other electronic
components. A controller may be a commercially available processor
such as an Intel Pentium, Motorola PowerPC, SGI MIPS, Sun
UltraSPARC, Qualcomm Snapdragon, or Hewlett-Packard PA-RISC
processor, but may be any type of application-specific and/or
specifically designed processor or controller. In an
embodiment, a processor and/or controller may be connected to other
system elements, including one or more memory devices, by a bus, a
mesh network, serial communication networks, wireless networks or
other mesh components. Usually, a processor or controller may
execute an operating system, which may be, for example, a
Windows-based operating system (Microsoft), a Mac OS X operating
system (Apple Computer), one of many open source operating systems,
a Solaris operating system (Sun), a portable electronic device
operating system (e.g., mobile phone operating systems such as iOS,
Android, or Windows Phone), a microcomputer operating system, a
single board computer operating system, a wearable device operating
system, and/or a UNIX operating system.
Embodiments are not limited to any particular implementation and/or
operating system.
[0032] The specification may refer to an artificial intelligence
(AI) device or computing device having a shading element or
structure as an apparatus that allows an operator or user to
verbally or audibly interface with it and having a shading element
and/or shading structure to provide shade and/or provide coverage
to an AI device (and potentially an operator) from weather elements
such as sun, wind, rain, and/or hail. In embodiments, a shading
element and/or shading structure may further comprise solar cells
and/or solar arrays to generate power for operation of the AI
system with shading element. In embodiments, the shading element or
shading structure may be a simple shading fabric, or a shading
frame and shading fabric. In embodiments, the shading element or
shading structure may be automated and/or intelligent and may
respond to commands, instructions and/or signals audibly spoken by
a user/operator or generated by a processor upon execution of
computer-readable instructions. The shading system, shading
structure and/or shading element may be referred to as a parasol,
an umbrella, a sun shade, sun screen, sun shelter, awning, sun
cover, sun marquee, or brolly; these terms are utilized
interchangeably throughout the specification. In
embodiments, a shading element or shading structure may be
connected to an AI device housing via a shading support, central
support assembly, a stem assembly, and/or tube.
[0033] FIG. 1 illustrates an apparatus including artificial
intelligence device or computing device with shading system
according to embodiments. An artificial intelligence (AI) device or
computing device having a shading system may comprise a shading
frame and/or fabric 103, a shading support assembly 105, and an AI
device housing (or computing device housing) 108. Although an AI
device may be referenced in the descriptions herein, the features
and functions described apply to both AI devices and computing
devices. In embodiments, an AI device may comprise voice
recognition or other AI computer-readable instructions stored in a
memory and executable by one or more processors. The
computer-readable instructions may be a complete AI engine and/or
an AI application programming interface.
[0034] In embodiments, a shading element or shade 103 may provide
shade to keep an AI shading device housing (or computing device
housing) 108 from overheating and/or protect it from other
environmental conditions (e.g., rain, sleet, snow, etc.). In
embodiments, an AI device or computing device housing 108 may be
coupled and/or connected to a shading support 105. In embodiments,
a shading system may refer to one or more shading supports 104 and
one or more shading elements or shades 103. In embodiments, a
shading support 105 may be coupled to an AI device or computing
device housing 108. In embodiments, a shading support 105 may
support a shade or shading element 103 and move it into position
with respect to an AI shading device housing 108. In this
illustrative embodiment of FIG. 1, an AI shading device housing 108
may be utilized as a base, mount and/or support for a shading
element or shade 103. In embodiments, a shading support 105 may be
simple and may not have a tilting assembly and/or may not be
adjustable. In embodiments, a shading support 105 may be simplified
and not have many electronics, components and/or assemblies
installed and/or positioned therein. In embodiments, a shading
support 105 may also not include an expansion and sensor assembly.
Illustratively, in embodiments, a shading support 105 may not
comprise an integrated computing device, may not have lighting
assemblies and/or may not have sensors installed therein and/or
positioned thereon. In embodiments, a shading element or shade 103
or a shade support 105 may comprise one or more sensors (e.g.,
environmental sensors 121, directional sensors 122 and/or proximity
sensors 123). For example, in embodiments, sensors may be a
temperature sensor, a wind sensor, a humidity sensor, an air
quality sensor, and/or an ultraviolet radiation sensor. In
embodiments, a shading element or shade 103, and/or a shade support
assembly 105 may comprise one or more imaging devices 126 (e.g.,
cameras). In embodiments, a shading support may not include an
audio system (e.g., a speaker 153 and/or an audio/video transceiver
152) and may not include lighting assemblies. In embodiments, a
shading housing 108 may not include one or more lighting
assemblies, one or more imaging devices, one or more sensors,
and/or one or more integrated computing devices. In embodiments, an
AI shading housing 108 may comprise one or more lighting
assemblies, one or more imaging devices, one or more sensors,
and/or one or more integrated computing devices.
[0035] In embodiments, an AI device or computing device housing 108
may comprise an integrated computing device 120. In embodiments, an
AI device or computing device housing 108 may comprise one or more
processors/controllers 127, one or more memory modules 128, one or
more microphones (or audio receiving devices) 129, one or more PAN
transceivers 130 (e.g., Bluetooth transceivers), one or more
wireless transceivers 131 (e.g., WiFi or other 802.11
transceivers), and/or one or more cellular transceivers 132 (e.g.,
EDGE transceiver, 4G, 3G, CDMA and/or GSM transceivers). In
embodiments, the processors 127, memory 128, transceivers 130, 131,
132 and/or microphones 129 may be integrated into a computing
device 120, whereas in other embodiments, a single-board computing
device 120 (e.g., Raspberry Pi) may not be utilized and processors
127 and/or memory devices 128 may be installed separately within an
AI device or computing device housing 108. In embodiments, one or
more memory modules 128 may contain computer-readable instructions
140, the computer-readable instructions 140 being executed by one
or more processors/controllers 127 to perform certain
functionality. In embodiments, the computer-readable instructions
may comprise an artificial intelligence application programming
interface (API) 141. In embodiments, an artificial intelligence API
141 may allow communications and/or interfacing between an AI
device housing 108 and a third party artificial intelligence (AI)
engine housed in a local and/or remote server and/or computing
device 150. In embodiments, an AI API 141 may comprise or include a
voice recognition AI API, which may be able to communicate sound
files (e.g., analog or digital sound files) to a third party voice
recognition AI server 150. In embodiments, a voice recognition AI
server may be an Amazon Alexa, Echo, Echo Dot and/or a Google Now
server or other third party voice recognition AI servers. In
embodiments, an AI engine and/or an AI voice recognition (e.g.,
computer-readable instructions 140 stored in one or more memories
128 and executed by one or more processors 127 performing AI
functions and/or AI voice recognition functions) may be resident on
an AI device housing 108 and a third party AI server and/or voice
recognition engine may not be utilized. In embodiments, as
discussed previously, a computing device housing may contain similar
components to an AI device housing as described in FIGS. 1 to
3.
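The voice-command flow described above (capture audio, package it as an audio file, pass it to a voice-recognition AI engine, and receive a command back) can be sketched as follows. This is a minimal illustration, not the patented implementation; the function names and the stub recognizer (standing in for a third-party AI server) are hypothetical.

```python
# Hypothetical sketch of the voice-command flow: captured microphone
# samples are packaged into an audio payload and handed to a
# voice-recognition engine, which returns a command. All names here
# (audio_to_command, stub_recognizer) are illustrative.

def audio_to_command(samples, recognize):
    """Package raw microphone samples and pass them to a recognizer.

    `recognize` stands in for a third-party voice-recognition AI
    engine (e.g., a remote server); it maps an audio payload to a
    command string.
    """
    audio_file = bytes(samples)   # convert captured sounds to an audio file
    return recognize(audio_file)  # engine converts speech to a command

# Usage with a stub standing in for the remote AI server:
def stub_recognizer(payload):
    return "OPEN_SHADE" if payload else "NO_COMMAND"

command = audio_to_command([1, 2, 3], stub_recognizer)
```

In a deployed system the `recognize` callable would wrap a wireless transceiver call to the external server; the stub keeps the sketch self-contained.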
[0036] In embodiments, solar cells and/or solar arrays may be
mounted on and/or integrated into a shading element or shade 103.
FIG. 1A illustrates a block diagram of components utilized to
provide power in an apparatus including an AI device or computing
device and a shading system. In embodiments, solar cells and/or
solar arrays or photovoltaic (PV) cells 191 may generate solar
energy from a sun and convert the solar energy into electrical
energy (e.g., voltage and/or current) or electrical power. In
embodiments, electrical energy or electrical power generated by one
or more solar cells, solar cell arrays and/or PV cells 191 may
charge and/or provide power to one or more rechargeable power
sources (e.g., a rechargeable battery) 193 in an AI or computing
device housing 108 (although a rechargeable battery may be
positioned within or located within a shading support 105 and/or
shading element 103). In embodiments, one or more charging
assemblies 192 may receive electrical energy from one or more solar
cells or PV cells 191 and transfer the electrical energy or
electrical power to a rechargeable power source 193 or battery. In
embodiments, one or more rechargeable power sources or batteries
193 in an AI device housing 108 may provide power to components
(e.g., transceivers, sensors, processors, and/or microphones, etc.
194) and/or assemblies 195 (e.g., motors or motor assemblies) in an
AI device housing 108, a shading support 105 and/or shading element
103. In embodiments, an AI device or computing device housing 108
may also receive power from an AC power source. In embodiments,
although FIG. 1A shows one or more rechargeable batteries 193
providing power to components 194 or assemblies, in alternative
embodiments, one or more charging assemblies 192 and/or solar
cells, solar arrays and/or PV cells 191 may directly or indirectly
provide power (e.g., voltage and/or current) to components 194
and/or assemblies 195.
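The power flow of FIG. 1A (PV cells feed a charging assembly, which charges a rechargeable battery; the battery, or an AC source, then powers components and assemblies) can be sketched as below. The class, capacities, and clamping behavior are illustrative assumptions, not details from the patent.

```python
# Illustrative sketch (not the patented implementation) of the FIG. 1A
# power flow: solar energy charges a rechargeable battery, which then
# supplies components 194 and assemblies 195, with AC as a fallback.

class RechargeableBattery:
    def __init__(self, capacity_wh):
        self.capacity_wh = capacity_wh
        self.stored_wh = 0.0

    def charge(self, energy_wh):
        # Charging assembly transfers PV energy, clamped to capacity.
        self.stored_wh = min(self.capacity_wh, self.stored_wh + energy_wh)

    def draw(self, load_wh, ac_available=False):
        # Battery supplies the load; an AC source covers any shortfall.
        from_battery = min(self.stored_wh, load_wh)
        self.stored_wh -= from_battery
        shortfall = load_wh - from_battery
        return 0.0 if ac_available else shortfall  # unmet load

battery = RechargeableBattery(capacity_wh=100)
battery.charge(60)        # energy from solar cells via charging assembly
unmet = battery.draw(80)  # components and motor assemblies demand 80 Wh
```

With no AC source the 80 Wh load leaves 20 Wh unmet; with `ac_available=True` the AC source covers the shortfall.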
[0037] In embodiments, an AI device or computing device housing 108
may comprise one or more sensors. In embodiments, an AI device
housing 108 may comprise one or more environmental sensors 121, one
or more directional sensors 122 and/or one or more proximity
sensors 123. Although the one or more environmental sensors 121,
one or more directional sensors 122 and/or one or more proximity
sensors 123 are illustrated as being located on and/or within the
AI device housing 108, the sensors identified above may be located
on and/or integrated with a shading support 105 and/or a shade
element or shade 103. In embodiments, one or more environmental
sensors 121 may comprise one or more air quality sensors, one or
more UV radiation sensors, one or more digital and/or analog
barometers, one or more temperature sensors, one or more humidity
sensors, one or more light sensors, and/or one or more wind speed
sensors. In embodiments, one or more directional sensors 122 may
comprise a digital compass, a compass, a GPS receiver, a gyroscope
and/or an accelerometer.
[0038] In embodiments, an environmental sensor 121 may comprise an
air quality sensor. In embodiments, an air quality sensor may
provide ozone measurements, particulate matter measurements, carbon
monoxide measurements, sulfur dioxide measurements and/or nitrous
oxide measurements. In embodiments, an air quality sensor may
provide allergen measurements. In embodiments, ozone measurements
may support intelligent readings that indicate whether or not an
individual should go inside. In embodiments, an air quality sensor
may communicate its measurements and/or readings
to an AI device or computing device housing processor 127. In
embodiments, a processor 127, executing computer readable
instructions 140 stored in memory 128, may receive air quality
sensor measurements, analyze the measurements, store the
measurements and/or cause AI device and shading system assemblies
and/or components to react to air quality measurements. In
embodiments, for example, if air quality is too low, e.g., as
compared to an existing threshold, one or more processors 127 may
communicate commands, instructions and/or signals to an audio
system 153 to alert a user of unsafe conditions by reproducing an
audible sound on a speaker. In embodiments, for example, ozone
measurements from an air quality sensor may be utilized to
determine an amount of time an individual should be outside, and
this amount of time may be communicated to an individual via a
sound system (communicated audibly), via a display and/or monitor
(displayed visually), and/or wirelessly to an external computing
device.
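The air-quality reaction just described (compare a measurement against an existing threshold and, if it is too low, command the audio system to reproduce an audible warning) can be sketched as follows. The threshold value and alert text are assumptions for illustration only.

```python
# Minimal sketch of the air-quality reaction: a processor compares a
# reading to a threshold and alerts the user via the speaker. The
# threshold (50) and message are illustrative assumptions.

AIR_QUALITY_THRESHOLD = 50  # hypothetical minimum acceptable index

def react_to_air_quality(measurement, play_audio):
    """If air quality falls below the threshold, alert via the speaker."""
    if measurement < AIR_QUALITY_THRESHOLD:
        play_audio("Warning: unsafe air quality, consider going inside.")
        return True
    return False

alerts = []
react_to_air_quality(30, alerts.append)  # unsafe reading triggers an alert
react_to_air_quality(80, alerts.append)  # safe reading does not
```

Here `play_audio` stands in for the command path to audio system 153; a list collects the alerts so the sketch stays self-contained.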
[0039] In embodiments, an AI device housing or computing device
housing 108 may comprise an ultraviolet (UV) radiation sensor. In
embodiments, a UV radiation sensor may provide discrete radiation
band measurements, including, but not limited to, UVB radiation,
UVA radiation, infrared radiation, or a combination of any and all
of these radiation measurements. In embodiments, a UV radiation
sensor may communicate these measurements to a processor 127. In
embodiments, a processor 127 and computer-readable instructions 140
executed by the processor 127, may analyze received UV radiation
measurements. In embodiments, a processor 127 and computer-readable
instructions 140 executed by the processor 127 may utilize UV
radiation measurements received to determine and/or calculate an
amount of time an individual should be outside, and this amount of
time may be communicated to an individual via a sound system 153
and/or 152 (communicated audibly), via a display and/or monitor,
and/or wirelessly to an external computing device.
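One way to turn a UV reading into a recommended outdoor time, as this paragraph describes, is sketched below. The formula (a fixed burn-time budget divided by the UV index) is a common rule of thumb used here only for illustration; the patent does not specify the calculation.

```python
# Hedged sketch: derive an amount of time an individual should be
# outside from a UV index measurement. The budget constant and the
# inverse-proportional rule are assumptions, not from the patent.

def safe_minutes_outside(uv_index, burn_budget=200):
    """Estimate minutes an individual may remain outside."""
    if uv_index <= 0:
        return float("inf")  # negligible UV radiation
    return burn_budget / uv_index

minutes = safe_minutes_outside(8)  # e.g., UV index 8 -> 25.0 minutes
```

The resulting value could then be spoken via the sound system, shown on a display, or sent wirelessly to an external computing device, as the paragraph notes.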
[0040] In embodiments, an environmental sensor 121 in an AI device
or computing device housing may comprise a digital barometer. In
embodiments, a digital barometer may provide, measure, and/or
display complex atmospheric data more accurately and quickly than
prior barometers. Many digital barometers display both current
barometric readings and previous 1-, 3-, 6-, and 12-hour readings
in a bar chart format, much like a barograph. They also account for
other atmospheric readings such as wind and humidity to make
accurate weather forecasts. In embodiments, a digital barometer
may capture atmospheric data measurements and communicate these
measurements to a processor 127. In embodiments, for example,
computer-readable instructions 140 executed by processor 127 may
receive digital barometer measurements (e.g., altitude
measurements), analyze and/or process these measurements, and
determine necessary movements or actions for components and/or
assemblies of an AI device and shading system 100. In embodiments,
for example, computer-readable instructions 140 executed by
processor 127 may receive digital barometer measurements and
generate a weather forecast for an area being served by an AI
device and shading system 100.
[0041] In embodiments, an environmental sensor 121 may comprise a
temperature sensor. In embodiments, a temperature sensor may
generate and provide a temperature reading or measurement for an
environment where an AI device and shading system 100 is located.
In embodiments, a temperature sensor may communicate these
measurements to a processor 127. In embodiments, computer-readable
instructions 140 executed by a processor 127 may receive
temperature measurements, analyze the temperature measurements,
and/or, determine actions that should be provided to components
and/or assemblies of an AI device and shading system. In
embodiments, for example, computer-readable instructions executed
by a processor may determine and/or calculate an amount of time an
individual should be outside, and this amount of time may be
communicated to an individual via a sound system 152 or 153
(communicated audibly), via a display and/or monitor, and/or
wirelessly to an external computing device.
[0042] In embodiments, an environmental sensor may comprise a
humidity sensor. In embodiments, a humidity sensor may capture and
generate humidity measurements in an environment where an AI device
and shading system 100 is located. In embodiments, a humidity
sensor may communicate these measurements to a processor 127. In
embodiments, computer-readable instructions 140 executed by a
processor may receive humidity measurements, analyze humidity
measurements and determine actions that may be taken by components
and/or assemblies of an AI device and shading system 100. In
embodiments, for example, computer-readable instructions 140
executed by a processor 127 may be utilized to determine and/or
calculate an amount of time an individual should be outside, and
this amount of time may be communicated to an individual via a
sound system (communicated audibly), via a display and/or monitor,
and/or wirelessly to an external computing device. In embodiments,
computer-readable instructions 140 executable by a processor may
receive humidity sensor readings and/or temperature sensor readings
and determine that 1) an AI Device or computing device housing
should be turned off because the environment is too hot or humid or
2) a shade element 103 should be deployed to provide shade to the
AI device or computing device housing. In embodiments,
computer-readable instructions 140 executable by a processor 127
may generate commands, instructions and/or signals and communicate
the same to a shading element control system (e.g., a motor
controller, a motor and/or driving system) to deploy a shade
element 103.
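The two reactions described in this paragraph, (1) turning the housing off when the environment is too hot or humid and (2) deploying the shade element, can be sketched as a simple decision function. All threshold values are assumptions for illustration.

```python
# Illustrative decision logic for paragraph [0042]: combine humidity
# and temperature readings into an action for the shading element
# control system. Thresholds are hypothetical.

def react_to_climate(temp_c, humidity_pct,
                     temp_limit=45.0, humidity_limit=90.0,
                     shutdown_temp=60.0):
    """Return the action a processor 127 might command."""
    if temp_c >= shutdown_temp:
        return "POWER_OFF"     # environment too hot for the housing
    if temp_c >= temp_limit or humidity_pct >= humidity_limit:
        return "DEPLOY_SHADE"  # signal the motor controller
    return "NO_ACTION"

action = react_to_climate(50.0, 40.0)  # hot but not critical: deploy shade
```

The "DEPLOY_SHADE" string stands in for the commands, instructions, and/or signals sent to a motor controller or driving system.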
[0043] In embodiments, an environmental sensor 121 may comprise a
wind sensor. In embodiments, a wind speed sensor may capture wind
speed and/or wind direction and generate wind speed and/or wind
direction measurements at an AI device and shading system. In
embodiments, a wind sensor may communicate these measurements to a
processor 127. In embodiments, computer-readable instructions 140
executable by a processor 127 may receive wind speed measurements,
analyze and/or process these measurements, and determine necessary
actions and/or movements by components and/or assemblies of an AI
device and shading system 100. In embodiments, computer-readable
instructions 140 executable by a processor 127 may communicate
commands, signals, and/or instructions to a shading element control
system (e.g., a motor controller, a motor and/or driving system) to
retract a shade element 103 due to high wind conditions. In
embodiments, for example, if a wind speed is higher than a
predetermined threshold, computer-readable instructions 140
executable by a processor 127 may communicate commands,
instructions, and/or signals to one or more motor controllers to
cause a shading element to be retracted and moved to a rest
position.
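The wind check above (retract the shading element to its rest position when measured wind speed exceeds a predetermined threshold) can be sketched as follows; the threshold value is an assumption.

```python
# Sketch of the high-wind reaction: when wind speed exceeds a
# predetermined threshold, send a retract command to the motor
# controller. The 25 mph threshold is illustrative.

WIND_THRESHOLD_MPH = 25.0  # assumed predetermined threshold

def react_to_wind(wind_speed_mph, send_motor_command):
    """Retract the shade element if wind is dangerously high."""
    if wind_speed_mph > WIND_THRESHOLD_MPH:
        send_motor_command("RETRACT_TO_REST")
        return True
    return False

commands = []
react_to_wind(32.0, commands.append)  # high wind: shade is retracted
react_to_wind(10.0, commands.append)  # calm: no command sent
```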
[0044] In embodiments, an AI device or computing device 100 may
comprise one or more digital cameras or imaging devices and/or
analog imaging devices 126. In embodiments, one or more cameras 126
may comprise an optical system and/or an image generation system.
In embodiments, image devices 126 may display images and/or videos
on a screen immediately after being captured. In embodiments, one
or more image devices 126 may store and/or delete images, sound
and/or video from a memory associated with an imaging device 126.
In embodiments, one or more imaging devices 126 may capture and/or
record moving videos with or without sound. In embodiments, one or
more imaging devices 126 may also incorporate computer-readable and
computer-executable instructions which, when retrieved from a
non-volatile memory, loaded into a memory, and executed by a
processor, may crop and/or stitch pictures, and/or potentially
perform other image editing on captured images and/or video. For
example, image stitching or photo stitching is the process of
combining multiple photographic images with overlapping fields of
view to produce a segmented panorama and/or high-resolution image.
In embodiments, image stitching may be performed through the use of
computer software embodied within an imaging device 126. In
embodiments, an imaging device 126 may also internally perform
video stitching. In embodiments, other devices, components and/or
assemblies of imaging devices 126 or of an AI device housing 108
may perform image stitching, video stitching, cropping and/or other
photo editing. In embodiments, computer-readable instructions 140,
may be executable by a processor 127 in an AI device housing 108
may perform image stitching, video stitching, cropping and/or other
photo editing.
[0045] In embodiments, imaging devices 126 (e.g., digital cameras)
may capture images of an area around, surrounding, and/or adjacent
to AI devices with a shading system 100. In embodiments, an AI
device housing 108 may comprise one or more imaging devices 126
(e.g., cameras) mounted thereon or integrated therein. In
embodiments, a shading support 105 and/or a shade element 103 may
comprise one or more imaging devices 126 (e.g., cameras). In
embodiments, an AI device and shading system with more than one
imaging device 126 may allow image, video and/or sound capture for
up to 360 degrees of an area surrounding, around and/or adjacent to
an AI device or computing device and shading system 100. In
embodiments, computer-readable instructions 140 executable by a
processor 127 may stitch and/or combine images and/or videos
captured by one or more imaging devices 126 to provide a panoramic
image of the area. Having multiple imaging devices provides a
benefit of panoramic image capture, rather than capture of only an
area where an imaging device is initially oriented. In embodiments, one
or more imaging devices 126 may have one or more image capture
resolutions (e.g., 1 Megapixel (MP), 3 MP, 4 MP, 8 MP, 13 MP and/or
38 MP) that are selectable and/or adjustable. In embodiments, one
or more imaging devices may also be located on a top portion of a
shading element 103 and/or shading support 105. In embodiments, if
an imaging device 126 is located on a top portion of an AI device
with shading system 100 (e.g., a shading element 103 and/or shading
support 105), images, sounds and/or videos may be captured at a
higher level than ground level. In addition, an imaging device
located on a top portion of an AI device and shading system may
capture images, sounds, and/or videos of objects in a sky or just
of a horizon or sky. For example, in embodiments, an imaging device
126 located on a top portion may capture images of mountains and/or
buildings that are in a skyline. This may be beneficial in
situations where there is a fire in the mountains or an issue with a
building or someone wants to monitor certain aspects of a building
(e.g., if certain lights are on). Further, one or more imaging
devices 126 located on a top portion of an AI device with shading
system may capture images, sounds, and/or videos of a night time
sky (e.g., stars). In addition, one or more imaging devices 126
located on a top portion of an AI device with shading system 100
may capture images, sounds, and/or videos of objects moving and/or
flying in the sky and/or horizon.
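Image stitching as described in paragraph [0045], combining images with overlapping fields of view into one panorama, can be shown with a toy example. Real stitching software aligns features photometrically; here images are simply lists of pixel columns and the overlap width is known in advance, which is a simplifying assumption.

```python
# Toy sketch of image/photo stitching: join overlapping "images"
# (lists of pixel columns) into a panorama, keeping each overlapping
# region only once. A known, fixed overlap is an assumption.

def stitch_pair(left, right, overlap):
    """Join two images, keeping the overlapping columns only once."""
    return left + right[overlap:]

def stitch_panorama(images, overlap):
    """Fold a sequence of overlapping images into one panorama."""
    pano = images[0]
    for img in images[1:]:
        pano = stitch_pair(pano, img, overlap)
    return pano

# Three 4-column "images" whose neighbors share 1 column:
pano = stitch_panorama([[1, 2, 3, 4], [4, 5, 6, 7], [7, 8, 9, 10]],
                       overlap=1)
```

The same fold structure applies to video stitching, frame by frame, as the paragraph notes an imaging device 126 may perform internally.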
[0046] In embodiments, one or more imaging devices 126 may be
activated by messages, signals, instructions and commands. In
embodiments, components and/or assemblies of an AI device and
shading system 100 (e.g., a processor 127, computer-readable
instructions 140 executed by a processor 127, and/or a proximity
sensor 123) may communicate messages, signals, instructions and/or
commands to the one or more imaging devices 126 to activate, turn
on, change modes, turn off, change focus and/or change capture
image resolution. In addition, messages, signals, instructions,
and/or commands may activate one or more imaging devices 126 and
software stored therein may perform image stitching, video
stitching, image editing and/or cropping. In embodiments, a
processor 127 and/or wireless transceiver 130-132 in an AI device
with shading system 100 may communicate messages, signals,
instructions and/or commands to activate one or more imaging
devices in order to perform functions and/or features described
above (which may include security system functions). In
embodiments, a computing device, separate from an AI device with
shading system 100, may communicate messages, signals, instructions
and/or commands to activate one or more imaging devices in order to
perform functions and/or features described above.
[0047] In embodiments, one or more imaging devices 126 may
communicate captured images, sounds and/or videos to a processor
127 of an AI shading device and these images, sounds and/or videos
may be stored in one or more memories 128 of an AI shading device.
In embodiments, one or more imaging devices 126 may communicate
captured images, sounds and/or videos to a memory of a remote
computing device separate from a processor and/or controller 127 in
an AI shading device housing 108. In embodiments, for example, one
or more imaging devices 126 may communicate captured images, sounds
and/or videos to an external computing device (directly for storage
and/or streaming). In embodiments, one or more imaging devices 126
may communicate captured images, sounds, and/or videos utilizing
wired (e.g., utilizing Ethernet, USB, or similar protocols and
transceivers) and/or wireless communication protocols (e.g.,
utilizing 802.11 wireless communication protocols and
transceivers).
[0048] In embodiments, an AI device or computing device housing 108
may comprise one or more imaging devices 126 and an infrared
detector. In embodiments, an infrared detector may comprise one or
more infrared light sources and an infrared sensor. In embodiments, an
infrared detector may generate a signal indicating that an object
is located within an area being monitored or viewed by an infrared
detector. In embodiments, if an infrared detector generates a
signal indicating that an object (and/or individual) is present,
one or more imaging devices 126 may be activated and begin to
capture images and/or video, with or without sound, and communicate
captured images and/or video, with or without sound, to a separate
computing device and/or a processor 127. In embodiments, if an
infrared detector generates a signal indicating that an object
(and/or individual) is present, a lighting assembly (e.g., LED
lights) may also be activated and lights may be directed in an area
surrounding an AI device and shading system 100 and/or directly to
an area where an object is detected. In embodiments, one or more
imaging devices 126 and/or one or more lighting assemblies may be
activated, which results in better images and/or video of an area
surrounding an AI device and shading system 100. This is yet
another example of how an AI device and shading system provides
additional benefits of not only capturing images of its surrounding
area but also being utilized as a security device for an
environment in which an intelligent shading object is located.
[0049] In embodiments, an AI device or computing housing 108 may
comprise one or more imaging devices 126 which may be thermal imaging
cameras. In embodiments, thermal imaging cameras may include a
special lens, an infrared light, and an array of infrared-detector
elements. In embodiments, an AI device and shading system 100 may
comprise an infrared light, a lens and a phased-array of
infrared-detector elements. In embodiments, a thermal imaging
camera may comprise a special lens that focuses on infrared light
emitted by objects within an area surrounding and/or adjacent to an
AI device/computing device and shading system 100. In embodiments, a
focused light may be scanned by a phased array of infrared-detector
elements. In embodiments, one or more detector elements may
generate a very detailed temperature pattern, which may be referred
to as a thermogram. In embodiments, a detector array may take a
short amount of time (e.g., about one-thirtieth of a second) to
obtain temperature information to make a thermogram. In
embodiments, information may be obtained from a plurality of points
in a field of view of a detector array. In embodiments, detector
elements from a thermogram may be converted and/or translated into
electrical impulses, and the electrical impulses may be sent to a
signal-processing unit. In embodiments, a signal-processing unit
may be a PCB with a dedicated chip that translates received
information (electrical impulses) into thermal images and/or
thermal video. In embodiments, a signal-processing unit may
communicate thermal images and/or thermal video to a display
(e.g., a display and/or a display on a computing device
communicating with an AI device and shading system 100). In
embodiments, a signal-processing unit of a thermal imaging camera
may communicate thermal images and/or thermal video to a processor
for analysis, storage and/or retransmission to an external
computing device. In embodiments, a thermal image may appear as
various colors depending on and/or corresponding to an intensity of
an infrared image. In embodiments, a thermal imaging camera allows
additional benefits of not having to activate a lighting assembly
in order to capture images and/or videos of an area surrounding an
AI device/computing device and system 100. In addition, by not
activating a lighting assembly, an intruder or moving object may
not be aware that an imaging device 126 may be capturing an image
or video of an area where an intruder or object is located. In
embodiments, an infrared detector may activate a thermal imaging
device upon detection of movement. In embodiments, a thermal
imaging device may activate on its own due to movement of an
intruder and/or object, or may be periodically or continuing
capturing images and/or video.
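The thermogram-to-image translation performed by the signal-processing unit, mapping detector readings to display colors whose appearance corresponds to infrared intensity, can be sketched as follows. The temperature bands and color names are illustrative assumptions.

```python
# Sketch of translating a thermogram (per-point temperature readings
# derived from detector-element impulses) into display colors. The
# band boundaries (30 C, 40 C) and color names are hypothetical.

def thermogram_to_colors(thermogram, cold=30.0, hot=40.0):
    """Map each temperature point to a color name for display."""
    colors = []
    for temp_c in thermogram:
        if temp_c < cold:
            colors.append("blue")    # low infrared intensity
        elif temp_c < hot:
            colors.append("yellow")  # moderate intensity
        else:
            colors.append("red")     # high intensity (e.g., a person)
    return colors

frame = thermogram_to_colors([22.0, 35.5, 41.2])
```

A real signal-processing unit would perform this translation over a full two-dimensional detector array about thirty times per second; a flat list keeps the sketch minimal.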
[0050] In embodiments, an AI device or computing device and shading
system 100 may comprise a proximity sensor 123. In embodiments, a
proximity sensor 123 may be able to detect a presence of nearby
objects, (e.g., people or other physical objects) without any
physical contact between a sensor and an object. In embodiments, a
proximity sensor 123 may be located on and/or mounted on an AI device
housing 108. In embodiments, a proximity sensor 123 may be located
on and/or mounted on other printed circuit boards or may be a
standalone component. In embodiments, a proximity sensor 123 may be
located within and/or mounted on a shading support 105 and/or a
shading element 103. In embodiments, a proximity sensor 123 may
generate measurements and/or signals, which may be communicated to
a processor/controller 127. In embodiments, computer-readable
instructions 140, which are fetched from memory 128 and executed by
a processor 127, may perform and/or execute a proximity process or
method. In embodiments, for example, a proximity process may
comprise receiving measurements and/or signals from a proximity
sensor 123 indicating an object and/or person may be located in an
area where an AI device and shading system is deployed, going to be
deployed and/or extended, and/or towards where a component of an AI
device and shading system 100 may be moving. For example, if an
individual is located in an area where a shading support 105 may be
deployed and/or extended, a proximity sensor 123 may transmit a
signal or measurement indicating an object may be an obstruction to
movement of a shading support 105. In embodiments,
computer-readable instructions 140 executable by a processor 127
may receive and/or analyze a proximity measurement and determine an
object may be an obstacle. In embodiments, a proximity signal
and/or command may also identify a location of an object (e.g.,
obstacle) in relation to a proximity sensor 123 and/or some
reference location. In embodiments, computer-readable instructions
140 executable by a processor 127 may generate and/or communicate a
driving signal, command, and/or instruction that instructs an AI
device and shading system 100 not to deploy and/or open. In
embodiments, this may also work in the opposite direction, where if
a proximity sensor 123 does not determine that an object is within
an AI device and shading system area, then a proximity sensor
signal may not be communicated to the processor/controller 127.
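The proximity process of this paragraph, receiving proximity measurements and commanding the system not to deploy when an object would obstruct the shading support, can be sketched as below. The distance units and clearance threshold are assumptions for illustration.

```python
# Illustrative proximity process: read distance measurements from a
# proximity sensor and decide whether deployment is safe. The 2 m
# clearance radius is a hypothetical value.

DEPLOY_CLEARANCE_M = 2.0  # assumed radius the shading support sweeps

def proximity_process(measurements_m):
    """Return 'DO_NOT_DEPLOY' if any object is an obstacle, else 'DEPLOY'."""
    for distance in measurements_m:
        if distance is not None and distance < DEPLOY_CLEARANCE_M:
            return "DO_NOT_DEPLOY"  # object would obstruct the support
    return "DEPLOY"

decision = proximity_process([5.1, 1.3])  # person 1.3 m away blocks it
```

A `None` reading models the opposite case the paragraph mentions, where no proximity signal is communicated to the processor/controller 127.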
[0051] In embodiments, a proximity sensor 123 may identify a location
of a person relative to moving components of an AI device or
computing device and shading system 100. Utilization of proximity
sensors 123 on AI devices and shading systems provides an advantage
over AI devices due to detection of objects, individuals, animals
and/or other devices. For example, based on proximity sensor
measurements, detections and/or values, an AI device and shading
system 100 may move a position of one or more assemblies or modules
(e.g., shading support, shading element, and/or other components)
to prevent problematic conditions or situations where objects
and/or individuals may damage components and/or assemblies of an AI
device and shading system. For example, based on proximity sensor
123 measurements or values, a shading element or shading support
may be retracted.
[0052] In embodiments, proximity sensors 123 may comprise one or
more laser sensors, light sensors, line of sight sensors,
ultrasound or ultrasonic sensors, infrared or other light spectrum
sensors, radiofrequency sensors, time of flight sensors, and/or
capacitive sensors. In embodiments, a proximity sensor 123 may emit
an electromagnetic field or a beam of electromagnetic radiation
(infrared, for instance), and may measure changes in a field
surrounding an object or measure changes in a return signal. In
embodiments, a laser sensor may comprise through-beam sensors,
retro-reflective sensors and/or diffuse reflection sensors. In
embodiments, a laser light returned may be measured against an
original signal to determine if an object and/or person is present.
In embodiments, laser light may consist of light waves of the same
wavelength with a fixed phase ratio (coherence), which results in
laser systems having an almost parallel light beam. Thus, movements
may be detected via small angles of divergence in returned laser
light. In embodiments, a light or photoelectric sensor may be
utilized as a proximity sensor 123 and may transmit one or more
light beams and may detect if any return reflected light signals
are present. In embodiments, a photoelectric sensor may be a
retro-reflective and/or diffusion sensor. In
embodiments, diffusion sensor emitters and receivers may be located
in a same housing. In embodiments, a target may act as a reflector,
so that detection may occur if light is reflected off a disturbance
object. In embodiments, an emitter sends out a beam of light (most
often a pulsed infrared, visible red, or laser) that diffuses in
all directions, filling a detection area. In embodiments, a target
may enter an area and may deflect part of a beam back to a
receiver. In embodiments, a photoelectric sensor may detect a
target and an output signal may be turned on or off (depending upon
whether a photoelectric sensor is light-on or dark-on) when
sufficient light falls on a receiver of a photoelectric sensor.
[0053] In embodiments, a proximity sensor 123 may be an inductive
sensor which may detect movements in metallic and/or ferrous
objects. In embodiments, inductive sensors may detect ferrous
targets, for example, a metal (e.g., steel) thicker than one
millimeter. In embodiments, a proximity sensor 123 may be a
capacitive sensor. In embodiments, a capacitive sensor may detect
both metallic and/or non-metallic targets in powder, granulate,
liquid, and solid form. In embodiments, a proximity sensor 123 may
be an ultrasonic sensor. In embodiments, an ultrasonic diffuse
proximity sensor may employ a sonic transducer, which emits a
series of sonic pulses, then listens for their return from a
reflecting target. In embodiments, once a reflected signal is
received, sensor signals may be output to a control device. In
embodiments, an ultrasonic sensor may emit a series of sonic pulses
that bounce off fixed, opposing reflectors, which may be any flat
surface. In embodiments, sound waves may return to a sensor within
a user-adjusted time interval and if sound waves do not, an object
may be obstructing an ultrasonic sensing path and an ultrasonic
sensor may output signals accordingly. In embodiments, a proximity
sensor 123 may be a time of flight sensor. In embodiments, time of
flight optical sensors may determine displacement and distance by
measuring a time it takes light to travel from an object
(intelligent shading system) to a target and back. In embodiments,
a time of flight sensor may be a time of flight camera, which is a
range imaging camera. In embodiments, a time-of-flight camera (ToF
camera) may resolve distance based on the speed of light, by
measuring a time-of-flight of a light signal between a camera and a
subject and/or target for each point of an image.
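The time-of-flight relationship just described is standard physics: light travels to the target and back, so the distance is the speed of light multiplied by half the round-trip time. A direct sketch:

```python
# Time-of-flight distance computation: distance = c * t / 2, where t
# is the measured round-trip time of the light signal. This is the
# standard physical relationship, not a patent-specific formula.

SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_distance_m(round_trip_s):
    """Distance to a target from a measured round-trip light time."""
    return SPEED_OF_LIGHT_M_S * round_trip_s / 2.0

# A 20 ns round trip corresponds to a target roughly 3 m away:
distance = tof_distance_m(20e-9)
```

A ToF camera applies this same computation independently at each point of the image to produce a range map.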
[0054] In embodiments, an AI device or computing device housing 108
may comprise one or more directional sensors 122. In embodiments, a
directional sensor 122 may also comprise a GPS transceiver, a
compass, a magnetometer, a gyroscope and an accelerometer. In
embodiments, a shading support 105 and/or a shading element 103 may
comprise one or more directional sensors (e.g., GPS transceiver, a
compass, a gyroscope and an accelerometer). In embodiments,
directional sensors may provide orientations and/or locations of an
AI device and shading system 100 as well as different components of
an AI device and shading system 100. In embodiments,
computer-readable instructions 140 executable by a processor 127
may request an initial desired orientation for different assemblies
and/or components of an AI device and shading system and
communicate such directional request to one or more directional
sensors 122. In embodiments, one or more gyroscopes may be utilized
to determine, calculate and/or detect an angle of a support
assembly 105 with respect to an AI device housing 108 and/or detect
an angle of a support assembly 105 with respect to a shading
element 103 (e.g., determine a current elevation of different
assemblies of an AI device and shading system 100). In embodiments,
one or more accelerometers may also be utilized along with one or
more gyroscopes to determine, calculate and/or detect angles
discussed above.
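When the assemblies are quasi-static, an accelerometer reads gravity alone, so the angle of an assembly relative to vertical follows from the axis components. A hedged sketch of that derivation (the axis convention is an assumption, not from the application):

```python
import math

# Tilt of an assembly relative to vertical from a 3-axis accelerometer
# reading: the angle between the assembly's z-axis and the gravity
# vector, assuming the sensor's z-axis points "up" along the assembly.

def tilt_angle_deg(ax: float, ay: float, az: float) -> float:
    """Angle (degrees) between the assembly's z-axis and vertical."""
    return math.degrees(math.atan2(math.hypot(ax, ay), az))

# Upright (gravity entirely on z): 0 degrees; lying flat: 90 degrees.
```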
[0055] In embodiments, computer-readable instructions 140 executed
by a processor 127 may communicate a directional request to one or
more directional sensors 122. In embodiments, one or more
directional sensors 122 (e.g., compass and/or magnetometer) may
determine movement and/or a relative position of an AI device with
shading system 100 (or other components or assemblies) with respect
to a reference direction. In embodiments, for example, a
directional measuring sensor 122 (e.g., a compass, digital compass
and/or magnetometer) may determine relative movement and/or a
relative position with respect to true north. In
embodiments, these measurements may be referred to as heading
measurements. In embodiments, a directional measuring sensor 122
may communicate and/or transfer heading measurements to a processor
127, where these heading measurements may be stored in a memory
128.
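A heading measurement of the kind described above can be derived from the horizontal magnetometer components. A minimal sketch, assuming a level sensor and omitting tilt compensation and the declination correction from magnetic to true north:

```python
import math

# Compass heading from magnetometer x/y components, normalized to
# 0-360 degrees (0 = north, 90 = east). Tilt compensation and the
# magnetic-declination correction are omitted for brevity.

def heading_deg(mx: float, my: float) -> float:
    """Heading in degrees clockwise from north for a level sensor."""
    return math.degrees(math.atan2(my, mx)) % 360.0
```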
[0056] In embodiments, in response to a directional orientation
request by computer-readable instructions 140 executed by a
processor 127, a GPS transceiver may measure a geographic location
of an AI device and shading system 100 (and associated assemblies)
and may communicate such geographic location measurement to a
processor 127, which may store these geographic location
measurements in a memory 128. In embodiments, a GPS transceiver may determine
latitude and/or longitude coordinates and communicate such latitude
and/or longitude coordinates to a processor 127. In embodiments, a
clock may capture a time of day and communicate and/or transfer
such time measurement to a processor, which may store the time
measurement in a memory 128.
[0057] In embodiments, computer-readable instructions 140 executed
by a processor 127 stored in a memory 128 may include algorithms
and/or processes for determining and/or calculating a desired
azimuth and/or orientation of an AI device and shading system (and
associated assemblies) depending on a time of day. In an
alternative embodiment, a portable computing device executing
computer-readable instructions on a processor (e.g., a SMARTSHADE
software app) and located in a vicinity of an AI device and shading
system 100 may retrieve coordinates utilizing a mobile computing
device's GPS transceiver and may retrieve a time from a mobile
computing device's processor clock and provide these geographic
location measurements and/or time to a processor 127 in an AI
shading housing 108.
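The desired orientation "depending on a time of day" suggests a solar-position calculation from latitude and time. The application does not give its algorithm; the sketch below uses a textbook declination approximation and ignores longitude and equation-of-time corrections, so it illustrates the idea rather than the actual implementation:

```python
import math

# Approximate solar elevation from latitude, day of year, and local
# solar hour. Uses Cooper's declination approximation; longitude and
# equation-of-time corrections are omitted for brevity.

def solar_elevation_deg(latitude_deg: float, day_of_year: int,
                        solar_hour: float) -> float:
    """Approximate solar elevation angle in degrees above the horizon."""
    # Solar declination, in degrees.
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle: 15 degrees per hour from local solar noon.
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, dec, ha = map(math.radians, (latitude_deg, decl, hour_angle))
    return math.degrees(math.asin(
        math.sin(lat) * math.sin(dec) +
        math.cos(lat) * math.cos(dec) * math.cos(ha)))

# Near the March equinox at the equator, the noon sun is nearly overhead.
```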
[0058] In embodiments, computer-readable instructions 140 stored in
a memory 128 may be executed by processor 127 and may calculate a
desired AI device and shading system 100 (and associated assemblies
such as shading support 105 and/or shading element 103) angle
and/or azimuth angle utilizing received geographic location
measurements, heading measurements, and/or time measurements. In
embodiments, computer-readable instructions 140 stored in a memory
128 may compare desired elevation angle measurements and
azimuth angle measurements to a current elevation angle and azimuth
angle of the AI device and shading system 100 (and associated
assemblies such as shading support 105 and/or shading element 103),
calculated from gyroscope measurements, accelerometer
measurements, or both, to determine movements that a shading
support 105 and/or shading element 103 may make in order to move to
a desired orientation. In embodiments, executed computer-readable
instructions may calculate an azimuth adjustment measurement to
provide to an azimuth motor and/or an elevation adjustment
measurement to provide to a motor assembly.
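The comparison step above amounts to subtracting the current angle from the desired angle and wrapping the result to the shortest rotation before sending it to a motor. A minimal sketch (function and variable names are illustrative):

```python
# Signed shortest rotation from a current orientation to a desired
# orientation, wrapped into [-180, 180) degrees, suitable as an
# adjustment command for an azimuth or elevation motor.

def shortest_adjustment_deg(desired: float, current: float) -> float:
    """Signed shortest rotation (degrees) from current to desired."""
    return (desired - current + 180.0) % 360.0 - 180.0

# Moving from heading 10 to heading 350 is a 20-degree move
# counterclockwise, not a 340-degree sweep clockwise.
azimuth_adjust = shortest_adjustment_deg(desired=350.0, current=10.0)
```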
[0059] In embodiments, an AI device or computing device housing 108
may comprise one or more microphones 129 to capture audio, and/or
audible or voice commands spoken by users and/or operators of
shading systems 100. In embodiments, computer-readable instructions
140 executed by one or more processors 127 may receive captured
sounds and create analog and/or digital audio files corresponding
to spoken audio commands (e.g., open shading system, rotate shading
system, elevate shading system, select music to play on shading
system, turn on lighting assemblies). In embodiments, an AI API
141 may communicate such generated audio files to an external AI
server 150. In embodiments, for example, an AI API 141 in an AI
shading device housing 108 may communicate generated audio files to
external AI servers 150 via and/or utilizing one or more PAN
transceivers 130, one or more wireless local area network
transceivers 131, and/or one or more cellular transceivers 132. In
other words, communications with an external AI server 150 may
occur utilizing PAN transceivers 130 (and protocols).
Alternatively, or in combination with, communications with an
external AI server 150 may occur utilizing a local area network
(802.11 or WiFi) transceiver 131. Alternatively, or in combination
with, communications with an external AI server 150 may occur
utilizing a cellular transceiver 132 (e.g., utilizing 3G and/or 4G
or other cellular communication protocols). In embodiments, an AI
shading device housing 108 may utilize or comprise more than one
microphone 129 to allow capture of voice commands from a number of
locations and/or orientations with respect to an AI device and
shading system 100 (e.g., in front of, behind an AI device and
shading system, and/or at a 45 degree angle with respect to a
support assembly 105).
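The capture step above (raw microphone samples converted into an audio file for upload to an external AI server) can be sketched with the Python standard library's `wave` module; the sample format (16-bit mono at 16 kHz) is an assumption for illustration, not from the application:

```python
import io
import wave

# Package raw 16-bit mono PCM microphone samples as an in-memory WAV
# file, of the kind that could then be handed to a transceiver for
# upload to an external voice-recognition server.

def pcm_to_wav_bytes(pcm: bytes, sample_rate: int = 16000) -> bytes:
    """Wrap raw 16-bit mono PCM samples in a WAV container."""
    buf = io.BytesIO()
    with wave.open(buf, "wb") as wav:
        wav.setnchannels(1)        # mono
        wav.setsampwidth(2)        # 16-bit samples
        wav.setframerate(sample_rate)
        wav.writeframes(pcm)
    return buf.getvalue()
```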
[0060] In embodiments, a mobile computing device 110 may
communicate with an AI Device or computing device and shading
system 100. In embodiments, a user and/or operator may communicate
with a mobile computing or communications device 110 by a spoken
command into a microphone of a mobile computing device 110. In
embodiments, a mobile computing or communications device 110
communicates a digital or analog audio file to a processor 127
and/or AI API 141 in an AI shading device housing 108 (e.g.,
utilizing one or more transceivers, such as a PAN transceiver 130, a
wireless or WiFi transceiver 131 and/or a cellular transceiver 132).
In embodiments, a mobile computing or communications device 110 may
also convert the audio file into a textual file for easier
conversion by either an AI API 141 or an AI engine in an external
AI server or computing device 150. In embodiments, an AI engine may
also be resident within one or more memories 128 of an AI shading
device housing 108 (e.g., computer-readable instructions 140
executed by a processor 127).
[0061] FIG. 1 describes an AI device or computing device and
shading system 100 having a shading element or shade 103, shading
support 105 and/or an AI shading device housing 108. An AI shading
device housing 108 such as the one described above may be attached
to any shading system and may provide artificial intelligence
functionality and services for such shading systems. In
embodiments, a shading system may be an autonomous and/or automated
shading system having an integrated computing device, sensors and
other components and/or assemblies, but may benefit from having and
may have artificial intelligence functionality and services
provided utilizing an AI API and/or an AI engine stored in a memory
of an AI device or computing device housing.
[0062] In embodiments, an AI device or computing device housing may
comprise an audio transceiver 153 and/or a sound reproduction
device 152 (e.g., speaker). In embodiments, audio files (e.g.,
digital and/or analog audio files) may be communicated to an
audio transceiver 153 and further to a sound reproduction device
152 for audible reproduction. Thus, communications from an AI
engine (e.g., feedback commands and/or instructions) may be
communicated to a transceiver 153 and/or speaker for audible
feedback. In embodiments, music and/or audio files communicated
from an external server and/or from local memory may be
communicated to an audio transceiver 153 and/or speaker 152 for
reproduction to a user and/or operator.
[0063] FIG. 5 illustrates a block and dataflow diagram of
communications between an AI device or computing device and shading
system according to embodiments. An AI Device or computing device
and shading system 570 may communicate with an external AI server
575 and/or additional content servers 580 via wireless and/or wired
communications networks. In embodiments, a user may speak 591 a
command (e.g., turn on lights, or rotate shading system) which is
captured as an audio file and received at an AI device or computing
device and shading system 570. In embodiments, an AI API 541 in an
AI device and shading system 570 may communicate and/or transfer
592 an audio file (utilizing a transceiver--PAN, WiFi/802.11, or
cellular) to an external or third-party AI server 575. In
embodiments, an external AI server 575 may comprise a voice
recognition engine or module 585, a command engine module 586, a
third party content interface 587 and/or third party content
formatter 588. In embodiments, an external AI server 575 may
receive 592 one or more audio files and a voice recognition engine
or module 585 may convert a received audio file to a device command
(e.g., shading system commands, computing device commands) and
communicate 593 device commands to a command engine module or
engine 586. In embodiments, if a voice command is for operation of
an AI device and shading system 570, a command engine or module 586
may communicate and/or transfer 594 a generated command, message,
and/or instruction to an AI device and shading system 570. In
embodiments, an AI device and shading system 570 may receive the
communicated command, communicate and/or transfer 595 the
communicated command to a controller/processor 571. In embodiments,
the controller/processor 571 may generate 596 a command, message,
signal and/or instruction to cause an assembly, component, system
or devices 572 to perform an action requested in the original voice
command (open or close shade element, turn on camera and/or
sensors, activate solar panels).
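The FIG. 5 dataflow above (recognized text mapped to a device command, which the controller/processor dispatches to an assembly) can be sketched as a command table; the command strings and handler actions below are hypothetical, not from the application:

```python
# Minimal command-engine sketch: a recognized command string is looked
# up in a table and routed to a handler that performs the device action.

actions = []  # stand-in for commands issued to assemblies/components

COMMAND_TABLE = {
    "open shading system": lambda: actions.append("deploy shade"),
    "turn on lights": lambda: actions.append("lights on"),
    "rotate shading system": lambda: actions.append("rotate"),
}

def dispatch(recognized_text: str) -> bool:
    """Route a recognized voice command to its handler; False if unknown."""
    handler = COMMAND_TABLE.get(recognized_text.strip().lower())
    if handler is None:
        return False
    handler()
    return True
```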
[0064] In embodiments, a user may request actions to be performed
utilizing an AI device or computing device and shading system's
microphones and/or transceivers that may require interfacing with
third party content servers (e.g., NEST, e-commerce site selling
sun care products, e-commerce site selling parts of AI devices and
shading systems, communicating with online digital music stores
(e.g., iTunes), home security servers, weather servers and/or
traffic servers). For example, in embodiments, an AI device or
computing device and shading system user may request 1) traffic
conditions from a third party traffic server; 2) playing of a
playlist from a user's digital music store accounts; 3) ordering a
replacement skin and/or spoke/blade arms for a shading system. In
these embodiments, additional elements and steps may be added to
previously described method and/or process.
[0065] For example, in embodiments, a user may speak 591 a command
or desired action (execute playlist, order replacement
spokes/blades, and/or obtain traffic conditions from a traffic
server) which is captured as an audio file and received at an AI
API 541 stored in one or more memories of an AI device or computing
device housing 570. As discussed above, in embodiments, an AI API
541 may communicate and/or transfer 592 an audio file utilizing a
shading system's transceiver to an external AI server 575. In
embodiments, an external AI server 575 may receive one or more
audio files and a voice recognition engine or module 585 may
convert 593 a received audio file to a query request (e.g., traffic
condition request, e-commerce order, retrieve and stream digital
music playlist).
[0066] In embodiments, an external AI server 575 may communicate
and/or transfer 597 a query request to a third party server (e.g., a
traffic conditions server (e.g., SIGALERT or WAZE), an e-commerce
server (e.g., a RITE-AID or SHADECRAFT server, or Apple iTunes
server)) to obtain third party goods and/or services. In
embodiments, a third party content server 580 (a communication and
query engine or module 581) may retrieve 598 services from a
database 582. In embodiments, a third party content server 580 may
communicate services queried by the user (e.g., traffic conditions
or digital music files to be streamed) 599 to an external AI server
575. In embodiments, a third party content server 580 may order
requested goods for a user and then retrieve and communicate 599 a
transaction status to an external AI server 575. In embodiments, a
content communication module 587 may receive communicated services
(e.g., traffic conditions or streamed digital music files) or
transaction status updates (e.g., e-commerce receipts) and may
communicate 601 the requested services (e.g., traffic conditions or
streamed digital music files) or the transaction status updates to
an AI device or computing device and shading system 570. Traffic
services may be converted to an audio signal, and the audio signal
may be reproduced utilizing an audio system 583. Digital music
files may be communicated and/or streamed directly to an audio
system 583 because no conversion is necessary. E-commerce
receipts may be converted and communicated to a speaker 583 for
reading aloud. E-commerce receipts may also be transferred to a
computing device in an AI device or computing device and shading
system 570 for storage and utilization later.
[0067] In embodiments, computer-readable instructions in a memory
module of an AI device or computing device and shading system 570
may be executed by a processor and may comprise a voice recognition
module or engine 542 and in this embodiment, voice recognition may
be performed at an AI device or computing device and shading system
570 without utilizing a cloud-based server. In embodiments, an AI
device and shading system 570 may receive 603 the communicated
command, communicate and/or transfer 604 the communicated command
to a controller/processor 571. In embodiments, the
controller/processor 571 may generate and/or communicate 596 a
command, message, signal and/or instruction to cause an assembly,
component, system or device 572 to perform an action requested in
the original voice command.
[0068] FIG. 2 illustrates an apparatus including an AI device or
computing device and shading system with adjustable shading
supports according to embodiments. In embodiments, an AI and
shading system 200 comprises a shading element or a plurality of
shading elements 203, one or more shading supports 205 and/or an AI
device housing 208. In embodiments, an AI device or computing
device housing 208 may comprise an upper body 212 and a base
assembly 211. In embodiments, an AI device housing 208 may comprise
a microphone and/or LED array 215. In embodiments, an AI device or
computing device housing 208 may comprise one or more processors
227, one or more PAN transceivers 230, one or more WiFi or 802.11
transceivers 231, and/or one or more cellular transceivers 232 (the
operations of which are described above with respect to FIG. 1). In
addition, an AI device housing 208 (and/or an AI device and shading
system 200) may also include sensors (similar to directional
sensors 122, environmental sensors 121 and/or proximity sensors 123
of FIG. 1), an audio receiver and speaker, a computing device
although these components and/or assemblies are not shown or
illustrated in FIG. 2.
[0069] In embodiments, an AI device housing 208 may comprise one or
more audio transceivers 243 and one or more speakers 229. In
embodiments, audio files, music files, and/or voice files may be
communicated to one or more audio transceivers 243 and/or one or
more speakers 229 for audio playback. In embodiments, one or more
speakers 229 may be a speaker line array where speakers are located
at least on each side of an AI device housing to provide sound
coverage on each side of an AI device or computing device housing 208
according to embodiments.
[0070] In embodiments, a microphone and/or LED array 215 may
provide sound capture and/or lighting on each side or a number of
sides of an AI device housing. In embodiments, as is illustrated in
FIG. 2, a microphone and/or LED array may be positioned above a
base assembly 211 of an AI device housing. FIG. 4 illustrates a
microphone and/or LED array in an AI device housing or computing
device housing according to embodiments. In embodiments, a
microphone and/or LED array 400 may comprise a plastic housing 405,
one or more flexible printed circuit boards (PCBs) or circuit
assemblies 410, one or more LEDs or LED arrays 415 and/or one or
more microphones and/or microphone arrays 420. In embodiments, a
plastic housing 405 may be oval or circular in shape. In
embodiments, a plastic housing 405 may be fitted around a shaft, a
post and/or tube in an AI device housing 208. In embodiments, a
plastic housing 405 may be adhered to, connected to and/or fastened
to a shaft, a post and/or tube. In embodiments, a flexible PCB or
housing 410 may be utilized to mount and/or connect electrical
components and/or assemblies such as LEDs 415 and/or microphones
420. In embodiments, a flexible PCB or housing 410 may be mounted,
adhered or connected to a plastic housing or ring 405. In
embodiments, a flexible PCB or housing 410 may be mounted, adhered
or connected to an outer surface of a plastic housing or ring 405.
In embodiments, a plastic housing or ring 405 may have one or more
waterproof openings 425 for venting heat from one or more
microphone arrays 420 and/or one or more LED arrays 415. In
embodiments, a plastic housing or ring 405 may have one or more
waterproof openings for keeping water away and/or protecting one or
more microphone arrays 420 and/or one or more LED arrays 415 from
moisture and/or water. In embodiments, one or more LED arrays 415 may be
mounted and/or connected on an outer surface of a flexible PCB
strip 410 and may be positioned at various locations on the
flexible PCB 410 to provide lighting in areas surrounding a shading
and AI system. In embodiments, one or more LED arrays may be spaced
at uniform distances around a plastic housing 405 (e.g., or ring
housing). In embodiments, one or more microphones or microphone
arrays 420 may be mounted and/or connected to a flexible PCB strip
410. In embodiments, one or more microphones or microphone arrays
420 may be positioned at one or more locations around a housing or
ring 405 to be able to capture audible sound and/or voice commands
coming from a variety of directions. In embodiments, one or more
microphones or microphone arrays 420 may be spaced at set and/or
uniform distances around a housing and/or ring 405.
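The uniform spacing described above (N microphones or LED clusters at equal angular intervals around a ring) reduces to placing elements every 360/N degrees. A minimal sketch, with an assumed coordinate convention:

```python
import math

# Mounting positions for n elements (microphones or LED clusters)
# evenly spaced around a ring of the given radius, as (x, y) points in
# the ring's plane.

def ring_positions(n: int, radius: float) -> list:
    """(x, y) mounting points for n elements evenly spaced on a ring."""
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n)) for k in range(n)]

# Four microphones land at 90-degree steps, one facing each quadrant.
```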
[0071] Referring back to FIG. 2, in embodiments, a base assembly
211 may be stationary and an AI device or computing device housing
or body 212 may rotate about a base assembly 211. FIG. 6
illustrates a block diagram of components and assemblies for
rotating an AI device housing or computing device housing body
about a base assembly. In embodiments, as illustrated in FIG. 6, a
base assembly 211 may comprise a motor 610, a motor controller 611,
a shaft or driving assembly 612 and/or a gearing assembly 613. In
embodiments, an AI device or computing device housing body 212 may
comprise a gearing assembly 614 and/or a connector 615. In
embodiments, in response to a command and/or instruction being
received by a motor controller 611, a motor controller 611 may
communicate a command and/or signal to a motor 610. In response, a
motor 610 may be activated and may cause rotation of a shaft or
driving assembly 612, which is connected to a gearing assembly 613.
In embodiments, rotation of a shaft or driving assembly 612 may
cause rotation of a gearing assembly 613. In embodiments, a gearing
assembly 613 in a base assembly 211 may cause rotation of a gearing
assembly 614 in an AI device housing body 212, which is connected
and/or coupled to a connector or plate 615 in an AI device or
computing device housing 208. In embodiments, rotation of a gearing
assembly 614 and/or a connector or plate 615 may cause rotation of
the AI device or computing device housing 208 about a base assembly
211. This provides an advantage over prior art devices
because the AI device or computing device housing 208 may move to
follow and/or track the sun, and thus the shading element or shade 203
may be able to provide protection from the sun and/or heat by
moving and/or tracking the sun. Although FIG. 6 illustrates a
motor controller 611, a motor 610, a driving assembly or shaft 612
and/or a gearing assembly 613 in a base assembly 211, and a gearing
assembly 614 and/or a connector or plate 615 in an AI device body
212, any of the components may be placed in or be resident in the
other assembly (e.g., some components (e.g., a gearing assembly
614 and/or a connector or plate 615) may be placed and/or
positioned in a base assembly 211 and other components (e.g., a motor
controller 611, a motor 610, a driving assembly or shaft 612 and/or
a gearing assembly 613) may be placed and/or positioned in an AI
device body 212). In either configuration, an AI device or computing
device body 212 may rotate about a base assembly 211, and this may
provide additional flexibility in providing protection from the sun
and other environmental conditions for the AI device body 212. In
embodiments, the description above and components and assemblies
utilized in FIGS. 2 and 6 which allow rotation of a base assembly
211 with respect to an AI device housing 208 may also be utilized
in a rotation assembly in FIG. 1 (rotation of a support assembly or
shading support 105 with respect to an AI or computing device housing 108). In
addition, the rotation of a base assembly 211 with respect to an AI
device or computing device housing 208 may also be utilized in the
AI device housings 108 (in FIG. 1) and 308 (in FIG. 3). In other
words, each of the devices described in FIGS. 1 and 3 may have the
ability to rotate about a base assembly.
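The drive chain above (motor controller commands a motor, whose shaft turns through gearing assemblies to rotate the housing) implies converting a requested housing rotation into motor steps through the reduction ratio. A minimal sketch; the step count and gear ratio values are illustrative assumptions:

```python
# Convert a requested housing rotation into stepper-motor steps through
# the gearing assemblies' reduction ratio: the motor must turn
# gear_ratio times farther than the housing.

def motor_steps_for_rotation(housing_deg: float, gear_ratio: float,
                             steps_per_motor_rev: int = 200) -> int:
    """Motor steps needed to rotate the housing by housing_deg degrees."""
    motor_deg = housing_deg * gear_ratio
    return round(motor_deg / 360.0 * steps_per_motor_rev)

# A 90-degree housing rotation through a 20:1 reduction needs 1000 steps.
```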
[0072] Referring back to FIG. 2, a shading support 205 may comprise
one or more support arms. For example, as illustrated in FIG. 2,
two support arms may be utilized to connect a shading element or
shade 203 to an AI device or computing device housing 208 (although
one, three, four, five or six support arms may also be utilized).
In embodiments, a motor assembly may cause one or more support arms 205
to move to different positions to protect an AI device or computing
device housing 208 from heat, sun, rain, hail, snow and/or other
environmental elements. In embodiments, movement of one or more
shading supports (or support arms) may tilt a shade element or
shade 203 towards a sun, such as illustrated by reference number
291 in FIG. 2. In embodiments, a motor controller may receive
commands, instructions, messages or signals requesting movement of
a shading element or shade 203 and may generate commands and/or
signals to cause a motor to turn, a shaft to rotate, and/or a
gearing assembly to turn. In embodiments, a gearing assembly may be
attached to a shading support 205 and may cause movement of one or
more shading supports 205 which in turn moves and/or rotates a
shading element or shade 203. In embodiments, a shading element or
shade 203 may be expandable. In embodiments, a shading element or
shade 203 may have one length and/or width in one position (e.g., a
rest position) and may expand to a larger length and/or width in
other positions (e.g., when deployed and protecting an AI device
housing 208 from weather or other environmental conditions). This
may be referred to as an expanding shade.
[0073] FIG. 3 illustrates an apparatus including AI device or
computing device and shading system with a hinging support assembly
according to embodiments. The operation of components
(transceivers, cameras, sensors, processors, and/or
computer-readable instructions) in AI device housing 308 is similar
to that described above with respect to FIGS. 1 and 2. FIG. 3's
AI device/computing device and shading system 300 comprises a two
hinge shading support 305. In embodiments, a two hinge shading
support 305 may comprise a first shading support 391, a hinging
assembly 392 and a second shading support 393. In embodiments, a
first shading support 391 may rotate with respect to an AI device
housing 308 as is illustrated by reference number 394 (and thus the
shading element or shade 303, the hinging assembly 392 and the
second shading support 393 may also rotate with respect to the AI
device housing). In embodiments, a second shading support 393 may
rotate about a first shading support 391 utilizing a hinging
assembly 392. In embodiments, a rotation of a second shading
support 393 about a first shading support using a hinging assembly
392 is illustrated by reference number 395. In embodiments, the
description above and the components and assemblies described
therein may be utilized in the hinging assembly 252 illustrated in
FIG. 2. In other words, similar components may be utilized in the
hinging assembly 252 of FIG. 2.
[0074] In embodiments, a first motor assembly comprises a first
motor shaft that may rotate in response to activation and/or
utilization of a first motor. In embodiments, a first motor shaft
may be mechanically coupled (e.g., a gearing system, a
friction-based system, etc.) to a force transfer shaft. In
embodiments, a first motor shaft may rotate in a clockwise and/or
counterclockwise direction and in response, a force transfer shaft
may rotate in a same and/or opposite direction. In embodiments, a
force transfer shaft may be mechanically coupled to a
receptacle in an AI device housing. In response to, or due to,
rotation of force transfer shaft in a receptacle in an AI device or
computing device housing 308, a first support assembly 391 (and
thus a shade element or shade 303 plus a hinging assembly 392 and a
second support assembly 393) may rotate with respect to the AI
device or computing device housing 308. In embodiments, a first
motor may be coupled to a gearbox assembly. In embodiments, a
gearbox assembly may comprise a planetary gearbox assembly. A
planetary gearbox assembly may comprise a central sun gear, a
planet carrier with one or more planet gears, and an annulus (or
outer ring). In embodiments, planet gears may mesh with a sun gear
while the outer ring's teeth may mesh with the planet gears. In embodiments,
a planetary gearbox assembly may comprise a sun gear as an input,
an annulus as an output and a planet carrier (one or more planet
gears) remaining stationary. In embodiments, an input shaft may
rotate a sun gear, planet gears may rotate on their own axes, and
may simultaneously apply a torque to the annulus, which in this
configuration serves as the output. In embodiments, a planetary gearbox assembly and a first
motor may be connected and/or adhered to a first support assembly
391 although resident within the AI device housing. In embodiments,
a motor and gearbox assembly may be resident within an AI device or
computing device housing 208. In embodiments, an output shaft from
a gearbox assembly may be connected to an AI device housing (e.g.,
an opening of an AI device housing) and/or a first support assembly 391. In
embodiments, because an AI device or computing device housing 308
is stationary, torque on an output shaft of a gearbox assembly may
be initiated by a first motor to cause a first support assembly 391
(and thus a shade element or shade 303) to rotate. In embodiments,
other gearbox assemblies and/or hinging assemblies may also be
utilized to utilize an output of a motor to cause a first support
assembly 391 (and hence a shade element or shade 303) to rotate
with respect to an AI or computing device housing 308. In
embodiments, a first motor may comprise a pneumatic motor, a servo
motor and/or a stepper motor. Although the rotation of a support
assembly with respect to a computing or AI device housing is
described above with respect to FIG. 3, similar or the same
components and assemblies (e.g., gearbox assemblies described
above) may be present in the device of FIG. 1 to allow rotation of
a support assembly 105 with respect to an AI or computing device
housing 108.
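For the planetary configuration described above (sun gear input, annulus output, planet carrier held stationary), the stage reduces speed by the ratio of sun teeth to ring teeth, with a direction reversal. A minimal sketch under that fixed-carrier assumption, with illustrative tooth counts:

```python
# Output speed of a fixed-carrier planetary stage: with the planet
# carrier held stationary, the annulus turns sun_teeth / ring_teeth as
# fast as the sun gear, in the opposite direction (negative sign).

def annulus_speed_rpm(sun_rpm: float, sun_teeth: int,
                      ring_teeth: int) -> float:
    """Annulus (output) speed for a fixed-carrier planetary gearbox."""
    return -sun_rpm * sun_teeth / ring_teeth

# A 20-tooth sun driving a 60-tooth ring gives a 3:1 speed reduction.
```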
[0075] In embodiments, a first support assembly 391 may be coupled
and/or connected to a second support assembly 393 via a hinging
assembly. In embodiments, a shading support 305 may comprise a
first support assembly 391, a second gearbox assembly (or a linear
actuator or hinging assembly) 392, a second support assembly 393, a
second motor, and/or a second motor controller. In embodiments, a
second motor assembly may comprise a second motor controller and a
second motor, and possibly a second gearbox assembly or linear
actuator. In embodiments, a shading support 305 may also comprise a
motor control board which may have a second motor controller mounted
and/or installed thereon. In embodiments, a second support assembly
393 may be coupled or connected to a first support assembly 391 via
a hinging assembly 392 (e.g., a second gearbox assembly). In
embodiments, a second gearbox assembly and a second motor connected
thereto, may be connected to a first support assembly 391. In
embodiments, an output shaft of a second gearbox assembly may be
connected to a second support assembly 393. In embodiments, as a
second motor operates and/or rotates, a second gearbox assembly
rotates an output shaft which causes a second support assembly 393
to rotate (either upwards or downwards) at a right angle from, or
with respect to, a first support assembly 391. In embodiments
utilizing a linear actuator as a hinging assembly 392, a steel rod
may be coupled to a second support assembly 393 and/or a first
support assembly 391, which causes a free hinging between a second
support assembly 393 and a first support assembly 391. In
embodiments, a linear actuator may be coupled, connected, and/or
attached to a second support assembly 393 and/or a first support
assembly 391. In embodiments, as a second motor operates and/or
rotates a steel rod, a second support assembly 393 moves in an
upward or downward direction with respect to a hinged connection
(or hinging assembly) 392.
[0076] In embodiments, a first support assembly 391 may comprise an
elevation motor, an elevation motor shaft, a worm gear, and/or a
speed reducing gear. In embodiments, a speed reducing gear may be
connected with a connector to a connection plate. In embodiments, a
first support assembly 391 may be mechanically coupled to a second
support assembly 393 via a connection plate. In embodiments, a
connection plate may be connected to a second support assembly 393
via a connector and/or fastener. In embodiments, an elevation motor
may cause rotation (e.g., clockwise or counterclockwise) of an
elevation motor shaft, which may be mechanically coupled to a worm
gear. In embodiments, rotation of an elevation motor shaft may
cause rotation (e.g., clockwise or counterclockwise) of a worm
gear. In embodiments, a worm gear may be mechanically coupled to a
speed reducing gear. In embodiments, rotation of a worm gear may
cause rotation of a speed reducing gear via engagement of channels
of a worm gear with teeth of a speed reducing gear. In embodiments,
a speed reducing gear may be mechanically coupled by a connection
plate to a second support assembly via a fastener or connector. In
embodiments, rotation of a speed reducing gear may cause a
connection plate (and/or a second support assembly 393) to rotate
with respect to a first support assembly 391 in a clockwise or
counterclockwise direction as is illustrated by reference number
395. In embodiments, a second support assembly 393 may rotate with
respect to a first support assembly 391 approximately 90 degrees
via movement of the connection plate. In embodiments, a second
support assembly 393 may rotate approximately 0 to 30 degrees with
respect to a first support assembly 391 via movement of the
connection plate.
[0077] FIG. 7A illustrates an AI Device with Shading System with a
movable base assembly according to embodiments. In embodiments, an
AI device and shading system 700 may
comprise a movable base assembly 710, an AI device or computing
device housing 708, a support assembly 730 and/or a shading element
or shade 740. In embodiments, a movable base assembly 710 may be
integrated as part of an AI device or computing device housing 708.
In embodiments, as described in FIGS. 2 and 6, an AI device or
computing device housing 708 may rotate about a movable base
assembly 710. In embodiments, a movable base assembly 710 may
comprise a base motor controller 715, a base motor 716, a drive
assembly 717 and/or one or more wheels (or base driving assemblies)
718. In embodiments, a base assembly or movable base assembly 710
may comprise one or more environmental sensors 721 and/or one or
more directional sensors 722. In embodiments, a base assembly 710
may also comprise one or more proximity sensors 719. In
embodiments, a base assembly or movable base assembly 710 may
comprise one or more processors or controllers 711, one or more
memory modules or memories 712 and/or computer-readable
instructions 713, where the
computer-readable instructions are fetched, read and/or accessed
from the one or more memory modules or memories 712 and executed by
the one or more processors or controllers 711 to perform a number of
functions. In embodiments, a base assembly or movable base assembly
710 may comprise one or more separate wireless transceivers 714. In
embodiments, a base assembly or movable base assembly 710 may
comprise one or more cameras 726. In embodiments, the one or more
cameras 726, one or more wireless transceivers 714, one or more
memory modules or memories 712, one or more proximity sensors 719,
one or more direction sensors 722 and/or one or more environmental
sensors 721 may be in addition to similar or same devices located
on the AI device housing 708, the support assembly 205 and/or shade
or shading element 203. In embodiments, operation and/or
utilization of these sensors and/or devices are similar to that
described with respect to FIGS. 1, 2 and 5.
[0078] In embodiments, a base assembly or movable base assembly 710
may move around a surface (e.g., a ground surface, a floor, a
patio, a deck, and/or an outdoor surface) based at least in part on
environmental conditions. In embodiments, a base assembly or
movable base assembly 710 may move based on pre-programmed settings
or instructions stored in one or more memories 712 of a base
assembly 710 or memory 228 of an AI device or computing device
housing 208. In embodiments, a base assembly or movable base
assembly 710 may move based on pre-programmed settings or
instructions stored in one or more memories 228 of an AI device or
computing device housing 708 and/or one or more memories 712 of a
base assembly 710. In embodiments, a base assembly 710 may move
around a surface in response to commands, instructions, messages or
signals communicated from portable computing devices (e.g., mobile
phone, smart phone, laptops, mobile communication devices, mobile
computing devices and/or tablets). In embodiments, a base assembly
or a movable base assembly 710 may move around a surface in
response to voice commands. In embodiments, for example, a base
assembly or movable base assembly 710 may move to track
environmental conditions (e.g., the sun, wind conditions, humidity
conditions, temperature conditions) and/or may move in response to
an individual's commands. In embodiments, a base assembly or
movable base assembly 710 may move around a surface based at least
in part on (or in response to) sensor readings. In embodiments, a base
assembly 710 may move around a surface based at least in part on
images captured and received by cameras located on a base assembly
710, a shading system 700, and/or a portable computing device
and/or a server (or computing device) 729.
[0079] In embodiments, computer-readable instructions 713 stored in
one or more memories 712 of a base assembly or movable base
assembly 710 may be executed by one or more processors 711 and may
cause movement of the base assembly based on or according to
pre-specified conditions and/or pre-programmed instructions. In
embodiments, for example, a base assembly 710 of an AI Device and
Shading System 700 may move to specified coordinates at a specific
time based on computer-readable instructions 713 stored
in one or more memories 712. For example, a base assembly 710 may
move 10 feet to the east and 15 feet to the north at 8:00 am based
on stored computer-readable instructions 713. Similarly, for
example, a base assembly 710 may move to specified coordinates or
locations at a specific time based on computer-readable
instructions 240 stored in one or more memories 228 of an AI Device
or computing device housing 708.
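The scheduled-movement example above (move 10 feet east and 15 feet north at 8:00 am) can be sketched as follows. The schedule format and the `pending_moves` helper are hypothetical illustrations, not an actual Shadecraft interface.

```python
# Minimal sketch of pre-programmed base-assembly movement; the schedule
# format and function names are illustrative assumptions only.
import datetime

# (time, feet east, feet north) entries, e.g. move 10 feet east and
# 15 feet north at 8:00 am as in the example above.
SCHEDULE = [(datetime.time(8, 0), 10, 15)]

def pending_moves(now, schedule=SCHEDULE):
    """Return (east, north) moves whose scheduled time has arrived."""
    return [(east, north) for t, east, north in schedule if now >= t]

print(pending_moves(datetime.time(8, 30)))  # [(10, 15)]
print(pending_moves(datetime.time(7, 0)))   # []
```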
[0080] In embodiments, for example, a base assembly 710 may move to
specified coordinates and/or location based upon other conditions
(e.g., specific days, temperatures, humidity, latitude and
longitude, and other devices being in proximity) that may match
conditions or be predicted on conditions stored in the
computer-readable instructions 713 stored in the one or more
memories 712 of a base assembly. For example, a base assembly 710
may move if it is 9:00 pm and/or if it is a Saturday. Similarly,
the computer-readable instructions 240 may be stored in one or more
memories 228 of an AI Device or computing device housing and
instructions, commands and/or messages may be communicated to a
motor controller 715 in a movable base assembly 710.
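Matching stored conditions such as those in the preceding example (move if it is 9:00 pm and/or a Saturday) can be sketched as a simple predicate. The condition fields are assumptions drawn from the examples above, not an actual stored-instruction format.

```python
# Hedged sketch of condition matching against pre-programmed settings;
# the specific conditions mirror the "9:00 pm and/or Saturday" example.
import datetime

def should_move(now):
    """Move if it is 9:00 pm or later, or if it is a Saturday."""
    return now.time() >= datetime.time(21, 0) or now.weekday() == 5

sat_noon = datetime.datetime(2018, 11, 17, 12, 0)  # a Saturday
fri_10pm = datetime.datetime(2018, 11, 16, 22, 0)  # Friday, 10:00 pm
print(should_move(sat_noon), should_move(fri_10pm))  # True True
```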
[0081] In embodiments, a motor controller and/or a processor 227 in
an AI device or computing device housing 708 may communicate
instructions, commands, signals and/or messages related to or
corresponding to base assembly 710 movement directly to a base
motor controller 715 and/or indirectly through a processor or
controller 711 to a base motor controller 715. For example, a motor
controller and/or processor 227 in an AI device or computing device
housing may communicate instructions and/or messages to a base
motor controller 715 which may result in a base assembly 710 moving
20 feet sideways. In embodiments, communication may pass through a
transceiver 714 to a base motor controller 715. In embodiments,
communications may pass through a base assembly controller or
processor 711 to a base motor controller 715. In embodiments,
computer-readable instructions stored on one or more memory modules
or memories 228 of an AI device or computing device housing 208,
may cause a processor 227 in an AI device or computing device
housing 208 to receive one or more measurements from one or more
sensors (including wind, temperature, humidity, air quality,
directional sensors (GPS and/or digital compass) in an AI device or
computing device housing 208, one or more shading supports 205,
and/or one or more shading elements 203; analyze the one or more
received measurements; generate commands, instructions, signals
and/or messages; and communicate such commands, instructions,
signals and/or messages to a base assembly 710 to cause a base
assembly 710 to move. For example, based on wind sensor or
temperature sensor measurements, computer-readable instructions
executed by a processor 227 of an AI device or computing device
housing 708 may communicate messages to a base motor controller 715
in a base assembly 710 to cause the base assembly 710 to move away
from a detected wind direction and/or condition. For example, based
on received solar power measurements (from one or more solar panel
assemblies) and/or a directional sensor reading (e.g., a digital
compass reading or GPS reading), a processor 227 executing
computer-readable instructions in an AI device housing 208 may
communicate messages and/or instructions to a base motor controller
715 to cause a base assembly 710 to automatically move in a
direction where solar panels may capture more solar power. This
provides an advantage: not only can an AI device with a shading
system rotate towards a light source (e.g., via a motor assembly in
an AI device or computing device housing 208), but the entire AI
device with shading system can also move to an area where no
obstacles, impediments, or unfavorable conditions are present,
because the base assembly 710 is movable from one location to
another.
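The sensor-driven decision described above (move away from detected wind, or toward a heading where solar panels capture more power) can be sketched as follows. The thresholds, units, and the move-command shape are illustrative assumptions, not values from this specification.

```python
# Illustrative sketch only: planning a base-assembly move from wind and
# solar sensor readings; thresholds and units are assumed values.

def plan_move(wind_mph, wind_from_deg, solar_watts, solar_best_deg):
    """Return a (direction_deg, distance_ft) move, or None to stay put.

    Move directly away from strong wind; otherwise drift toward the
    heading where the solar panels reported the most power.
    """
    if wind_mph > 20:                      # assumed wind-safety threshold
        return ((wind_from_deg + 180) % 360, 10)
    if solar_watts < 50:                   # assumed low-power threshold
        return (solar_best_deg, 5)
    return None

print(plan_move(25, 90, 80, 0))   # (270, 10): move away from an east wind
print(plan_move(5, 90, 30, 180))  # (180, 5): seek better sun exposure
print(plan_move(5, 90, 80, 180))  # None: no move needed
```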
[0082] In embodiments, a portable or mobile computing device 723
(e.g., smart phone, mobile communications device, a laptop, and/or
a tablet) and/or a computing device 729 may transmit commands,
instructions, messages and/or signals to a base assembly 710
identifying desired movements of a base assembly 710. In
embodiments, a portable or mobile computing device 723 and/or a
computing device 729 may comprise computer-readable instructions
stored in a memory of a portable computing device 723 or computing
device 729 and executed by a processor (e.g., SMARTSHADE software)
that communicates with an AI Device with Shading System 700 as is
described supra herein. In embodiments, computer-readable
instructions executed by a processor of a mobile computing device
723 may be part of a client-server software application that also
has computer-readable instructions stored on a server and executed
by a processor of a server (e.g., computing device 729). In
embodiments, computer-readable instructions executed by a processor
of a mobile computing device 723 may be part of a client-server
software application that also has computer-readable instructions
240 stored in a memory 228 and executed by a processor 227
of an AI device housing 208 of an AI device and shading system 200
or 700. In other words, not all of the computer-readable
instructions may be stored on a mobile computing device 723. In
embodiments, computer-readable instructions executed by a
processor of a mobile computing device 723 may communicate
instructions, commands and/or messages directly to a base assembly
710 via a wireless transceiver (e.g., a wireless transceiver 724 on
a mobile computing device 723 may communicate commands and/or
messages to a transceiver 714 on a base assembly 710).
[0083] In embodiments, voice commands may be converted on a mobile
computing device 723 and instructions and/or messages based at
least in part on the voice commands may be transmitted (e.g., via a
wireless transceiver 724) to a base assembly motor controller 715
directly (e.g., through a wireless transceiver 714), or indirectly
via a wireless transceiver 714 and/or a base assembly processor 711
to automatically move a base assembly 710 in a specified direction
and/or distance or to specified coordinates. In embodiments, a
mobile computing device 723 may communicate instructions, messages
and/or signals corresponding to voice commands and/or audio files
to a base assembly motor controller 715 directly, or indirectly as
described above. In embodiments, where audio files are received,
computer-readable instructions 713 stored in a base assembly memory
712 may be executed by a base assembly processor 711 to convert the
voice commands into instructions, signals and/or messages
recognizable by a base assembly motor controller 715. Similarly, if
audio files are received by a processor 227 in an AI Device housing
208, computer-readable instructions 240 stored in a memory 228 may
be executed by an AI device housing processor 227 to convert voice
commands into instructions, signals and/or messages recognizable by
a base assembly motor controller 715. In embodiments,
computer-readable instructions executed by a processor on a mobile
computing device 723 may present a graphical representation of a
base assembly 710 on a mobile computing device display. In
embodiments, a mobile computing device 723 may receive commands via
a user interface from a user representing directions and/or
distance to move a base assembly (e.g., a user may select a graphic
representation of a base assembly on a display of a mobile
computing device and indicate that it should move to a left or east
direction approximately 15 feet) and computer-readable instructions
executed by a processor of a mobile computing device 723 may
communicate commands, instructions and/or messages representative
of a base assembly movement directions and/or distance directly
and/or indirectly to a base assembly motor controller 715 to cause
movement of a base assembly 710 in the selected direction and/or
distance. Similarly, the mobile computing device 723 may
communicate commands, instructions and/or messages to a processor
in an AI device housing 208, which in turn will communicate
commands, instructions and/or messages to a base assembly
motor controller 715 to cause movement of a base assembly 710 in
the selected direction and/or distance. This feature may provide an
advantage of independently moving a base assembly 710 (and thus an
AI device or computing device and shading system) from a remote
location without having to be next to or in proximity to a base
assembly.
[0084] In embodiments, a transceiver 714 and/or a transceiver 724
may be a WiFi transceiver (e.g., an 802.11 transceiver), a cellular
transceiver, and/or a personal area network transceiver (e.g., a
Bluetooth or Zigbee
transceiver) so that a mobile computing device 723 (and its
wireless transceiver 724) may communicate with a base assembly 710
via a number of ways and/or protocols. In embodiments, a mobile
computing device 723 may utilize an external server (e.g., a
computing device or server computer 729) to communicate with a base
assembly 710.
[0085] FIG. 7B is a flowchart illustrating base assembly movement
according to voice commands according to embodiments. In
embodiments, a base assembly 710 may move in response to voice
commands. In embodiments, voice-recognition software (e.g.,
computer-readable instructions) may be stored in a memory 712 of a
base assembly and executed by a base assembly processor 711 to
convert 771 actual voice commands (spoken by an operator) or
received voice audio files into messages, instructions and/or
signals which can then be communicated 772 to a base motor
controller 715. In embodiments, a base motor controller 715 may
generate commands or messages and communicate 773 these commands or
messages to cause a base assembly 710 to move in a direction and/or
distance based at least in part on received voice commands and/or
audio files. In embodiments, a voice recognition application programming
interface (API) may be stored in a memory 712 of a base assembly
710. In embodiments, a voice recognition API may be executed by a
processor 711 and voice commands and/or voice audio files from a
base assembly may be communicated 774 to an external server (e.g.,
via a wireless transceiver 714) or other network interface. In
embodiments, voice recognition software may be present or installed
on an external server (e.g., computing device 729) and may process
775 the received voice commands and/or voice audio files and
convert the processed voice files into instructions and/or
messages, which may then be communicated 776 back to a base
assembly 710. In embodiments, the communicated instructions,
commands and/or messages from an external voice recognition server
(e.g., computing device 729) may be received at a base assembly 710
and transferred and/or communicated (e.g., via a transceiver 714
and/or a processor 711) 777 to a base motor controller 715 to cause
a base assembly 710 to move in directions and/or distances based at
least in part on the received voice commands. Similarly, voice
recognition of received voice commands and/or audio files, as
discussed above, may be performed at an AI device or computing
device housing 208 (e.g., utilizing computer-readable instructions
240 stored in memories 228) and/or at a mobile computing device 723
(e.g., utilizing computer-readable instructions stored in memories
of a mobile computing device 723) or combination thereof, and
converted instructions, commands and/or messages may be
communicated to a base motor controller 715 to cause movement of a
base assembly in specified directions and/or distances. The ability
of a base assembly 710 to move in response to voice commands
provides an advantage: a shading system can move quickly (and be
communicated with via a variety of interfaces) under specific and
customizable instructions, without a user having to physically
exert themselves to move an umbrella and/or shading system to a
proper and/or desired position.
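The voice-command flow of FIG. 7B (steps 771-777) can be sketched as below. The `recognize` function is a hypothetical stand-in for the external voice-recognition server, and the phrase format and command fields are illustrative assumptions.

```python
# Sketch of the FIG. 7B voice pipeline; recognize() stands in for
# server-side voice recognition (steps 774-776) and is an assumption,
# not an actual Shadecraft API.

def recognize(audio_text):
    """Turn a recognized phrase into a motor-controller instruction."""
    words = audio_text.lower().split()
    if "move" in words and "forward" in words:
        feet = int(words[words.index("forward") + 1])
        return {"command": "move", "direction_deg": 0, "distance_ft": feet}
    return {"command": "stop"}

def handle_voice(audio_text, motor_controller):
    """Steps 771-773/777: convert a voice command and hand the resulting
    instruction to the base motor controller (modeled here as a queue)."""
    motor_controller.append(recognize(audio_text))

controller_queue = []  # stands in for base motor controller 715
handle_voice("Move forward 20 feet", controller_queue)
print(controller_queue)
# [{'command': 'move', 'direction_deg': 0, 'distance_ft': 20}]
```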
[0086] FIG. 7C illustrates movement of a base assembly according to
sensor measurements according to embodiments. In embodiments, a
base assembly 710 may comprise one or more sensors, e.g.,
environmental sensors 721 (wind, temperature, humidity and/or air
quality sensors), direction sensors 722 (e.g., compass and/or GPS
sensors), and/or proximity sensors 719. In embodiments, in addition
or as an alternative, an AI device or computing device housing 208
may comprise one or more environmental sensors, directional sensors
and/or proximity sensors mounted thereon and/or installed therein.
In embodiments, in addition or as an alternative, an external
hardware device (e.g., a portable computing device 723) or other
computing devices (e.g., that are part of home security and/or
office building computing systems or computing device 729) may
comprise directional sensors, proximity sensors, and/or
environmental sensors that communicate with an AI device or
computing device and shading system 700 and/or a base assembly 710.
In embodiments, sensors 722 located within a base assembly 710 may
capture 781 measurements of environmental conditions and/or
location information adjacent to and/or surrounding the base
assembly 710. In embodiments, one or more sensors 722 may
communicate 782 sensor measurements to a processor and/or
controller 711. In embodiments, computer-readable instructions 713
stored in a memory 712 of a base assembly may be executed by a
processor and/or controller 711 and may analyze 783 sensor
measurements. In embodiments, based on the analysis of sensor
measurements, computer-readable instructions 713 may generate 784
movement direction values and distance values and/or instructions
for a base assembly 710. In embodiments, computer-readable
instructions executed by a processor 711 may communicate 785 the
generated direction values and/or distance values and/or
instructions to a base assembly motor controller 715, which
generates messages, commands, and/or signals to cause 786 a drive
assembly (e.g., a motor, shaft and/or wheels or a motor, shaft
and/or treads) to move a base assembly 710 based at least in part
on the generated direction values and/or distance values and/or
instructions.
[0087] In embodiments, environmental sensors and/or directional
sensors may be located on an AI device or computing device housing
208, external hardware devices (e.g., portable computing device
723) and/or external computing devices (e.g., computing device or
server 729). In embodiments, intelligent shading system sensors and
external device sensors may capture 787 environmental measurements
(e.g., wind, temperature, humidity, air quality) and/or location
measurements (e.g., latitude and/or longitude; headings, altitudes,
etc.) and may communicate captured measurements or values to
processors and/or controllers in respective devices (e.g., AI
device or computing device housing 208, portable computing device
723 or external computing devices 729). In embodiments,
computer-readable instructions executed by processors and/or
controllers of an AI device housing 208, portable computing device
723 and/or external computing device 729 may analyze sensor
measurements and generate movement values or instructions (e.g.,
direction values and/or distance values) and/or may communicate
sensor measurements (or generated movement values or instructions)
788 to a base assembly 710 utilizing transceivers in intelligent
shading systems, portable computing devices (e.g., transceiver 723)
and/or external computing devices (e.g., computing device 729) and
one or more base assembly transceivers 714. In other words, sensor
measurements, analyzed sensor measurements, and/or movement
instructions may be communicated to a base assembly 710. In
embodiments, some or all of the steps of 783-786 may be repeated
for the received sensor measurements and/or movement instructions
received from an AI device housing sensors, external hardware
device sensors, portable computing device sensors and/or external
computing device sensors, which results in movement of a base
assembly 710 based on the received sensor measurements or
instructions.
[0088] FIG. 7D illustrates movement of a base assembly utilizing a
camera and/or pattern recognition and/or image processing according
to embodiments. In embodiments, a base assembly or movable base
assembly 710 may comprise one or more cameras 726 and may utilize
pattern recognition and/or image processing to identify potential
base movement. In embodiments, in addition or as an alternative, an
AI device or computing device and shading system 700 may comprise
one or more cameras 739 located thereon and/or within and may
communicate images, video and/or sound with a base assembly 710. In
embodiments, in addition or as an alternative, an external hardware
device (e.g., a portable computing device 723) or other computing
devices 729 (e.g., that are part of home security and/or office
building computing systems) may comprise one or more cameras that
communicate images, videos and/or sounds/audio to an AI device or
computing device and shading system 700 and/or a base assembly
710.
[0089] In embodiments, one or more cameras 726 located within a
base assembly 710, one or more cameras 126 in an AI device and
shading system, a portable computing device 723 and/or a remote
computing or hardware device (e.g., 729) may capture 791 images,
videos and/or sounds adjacent to and/or surrounding a base assembly
710 and/or AI device or computing device housing or body 207. In
embodiments, one or more cameras 726 in a base assembly 710, one or
more cameras in an AI device and shading system, one or more
cameras in a portable computing device 723 and/or remote computing
device (e.g., computing device 729) may communicate 792 captured
images to a processor and/or controller 711 in a base assembly 710.
In embodiments, computer-readable instructions 713 stored in a
memory 712 of a base assembly 710 may be executed by a processor
and/or controller 711 and may analyze 793 captured images to
determine if any patterns and/or conditions are recognized as
requiring movement of an AI device or computing device and shading
system 700 via movement of a base assembly 710. In embodiments,
based on the analysis and/or pattern recognition of captured
images, video and/or sounds, computer-readable instructions 713 may
generate 794 movement direction values and/or distance values
and/or instructions for a base assembly 710. In embodiments,
computer-readable instructions executed by a processor 711 may
communicate 795 generated direction values and/or distance values
and/or instructions to a base assembly motor controller 715, which
generates messages, commands, and/or signals to cause and/or
activate 796 a drive assembly (e.g., a motor, shaft and/or wheels
or a motor, shaft and/or treads) to move a base assembly 710 based
at least in part on the generated direction values and/or distance
values. In embodiments, computer-readable instructions executed by
a processor of an AI device and shading system, a portable
computing device 723 and/or a computing device 729 may receive
images, videos and/or sounds from cameras on a base assembly 710,
an AI device or computing device and shading system 700, a portable
computing device 723 and/or a computing device 729, analyze the
received images, videos and/or sounds, and may generate 797
direction values and/or distance values or instructions for base
assembly movement. In other words, image recognition or pattern
recognition may be performed at any of the discussed assemblies or
computing devices (e.g., base assembly 710, portable computing
device 723, external computing device 729 and/or AI device or
computing device and shading system 700). In embodiments,
computer-readable instructions executed by processors of an AI
device or computing device and shading system 700, a mobile
computing device 723 and/or a computing device 729 may communicate
798 base assembly direction values and distance values to a base
assembly 710 via a transceiver.
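The FIG. 7D pipeline described above (capture 791, analyze 793, generate 794, communicate 795) can be sketched as follows. The pattern detector and frame format are trivial stand-ins for real image processing, and all names are illustrative assumptions.

```python
# Hedged sketch of FIG. 7D (steps 791-798): a recognized pattern in a
# captured frame yields direction/distance values for the base motor
# controller. The "recognizer" here is a placeholder, not real vision code.

def analyze_frame(frame):
    """Step 793: report the label, if any, that a recognizer attached
    to the captured frame."""
    return frame.get("detected")

def plan_from_pattern(pattern):
    """Step 794: generate direction/distance values for a pattern."""
    if pattern == "obstacle_ahead":
        return {"direction_deg": 180, "distance_ft": 5}  # back away
    return None

frame = {"camera": 726, "detected": "obstacle_ahead"}
move = plan_from_pattern(analyze_frame(frame))
print(move)  # {'direction_deg': 180, 'distance_ft': 5}
```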
[0090] In embodiments, a base assembly motor controller 715 may
receive generated direction values and/or distance values and/or
instructions, and may generate messages, commands, and/or signals to
cause 796 a drive assembly (e.g., a motor, shaft and/or wheels or a
motor, shaft and/or treads) to move a base assembly 710 based at
least in part on the generated direction values and/or distance
values and/or instructions.
[0091] In embodiments, one or more sensors 719, 721 and/or 722 in a
base assembly 710 may generate sensor readings or measurements. In
embodiments, a controller or processor and/or a transceiver 714 may
communicate commands, instructions, signals and/or messages to a
base motor controller 715 to identify movements and/or directions
for a base assembly 710. In response, a shading system controller
may send commands, instructions, and/or signals to a base assembly
710 identifying desired movements of a base assembly.
[0092] In embodiments, a base assembly 710 may comprise a
processor/controller 711, a motor controller 715, a motor 716
and/or a drive assembly 717 which physically moves a base assembly
710. As described above, many different components, systems and/or
assemblies may communicate instructions, commands, messages and/or
signals to a processor 711 and/or a base assembly motor controller
715. In embodiments, the instructions, commands, messages and/or
signals may correspond to, be related to and/or indicative of
direction values and/or distance values that a base assembly 710
may and/or should move. In embodiments, a base motor controller 715
may receive direction values and distance values or instructions
and convert these into signals, commands and/or messages for
a motor and/or turbine 716. In embodiments, a motor and/or turbine
716 may be coupled, attached and/or connected to a driving assembly
717. In embodiments, a driving assembly 717 may drive a base
assembly 710 to a location based at least in part on direction
values and/or distance values. In embodiments, a driving assembly
717 may comprise one or more shafts, one or more axles, and/or one or
more wheels 718. In embodiments, a motor 716 generates signals to
cause shafts to rotate, axles to rotate, and/or wheels to spin
and/or rotate which causes a base assembly 710 to move. In
embodiments, a driving assembly 717 may comprise one or more
shafts, one or more conveying devices and one or more treads (e.g.,
tread assemblies). In embodiments, a motor 716 may generate
signals, messages and/or commands to cause one or more shafts to
rotate, which may cause one or more conveying devices to rotate,
which in turn causes treads (and/or tread assemblies) to rotate
and travel about a conveying device, where the one or more treads
(and/or tread assemblies) cause a base assembly 710 to move.
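Converting a distance value into wheel motion for the drive assembly can be sketched as below. The wheel diameter is an assumed value for illustration, not a dimension from this specification.

```python
# Sketch of converting a distance value into wheel rotations for the
# drive assembly; the 8-inch wheel diameter is an assumption.
import math

def wheel_rotations(distance_ft, wheel_diameter_in=8.0):
    """How many full wheel rotations move the base the given distance."""
    circumference_ft = math.pi * wheel_diameter_in / 12.0
    return distance_ft / circumference_ft

# Roughly 9.5 rotations of an assumed 8-inch wheel cover 20 feet,
# as in the "move 20 feet sideways" example above.
print(round(wheel_rotations(20), 1))  # 9.5
```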
[0093] In embodiments, a motor and drive assembly may be replaced
by an air exhaust system and air exhaust vents. In embodiments, a
motor controller 715 may be replaced by an exhaust system
controller. In embodiments, an exhaust system controller may
receive instructions, commands, messages and/or signals from a
controller identifying movement distances and directional
measurements for a base assembly 710. In embodiments, an exhaust
system controller may convert the commands, messages and/or signals
into signals and/or commands understandable by exhaust system
components. In embodiments, an exhaust system (or exhaust system
components) may control operation of air exhaust vents on a base
assembly 710 in order to move a base assembly a desired direction
and/or distance. In embodiments, a base assembly 710 may hover
and/or glide over a surface when being moved by operation of
exhaust vents.
[0094] In embodiments, a SMARTSHADE and/or SHADECRAFT application
or a desktop computer application may transmit commands,
instructions, and/or signals to a base assembly 710 identifying
desired movements of a base assembly 710. In embodiments, a base
motor controller 715 may receive commands, instructions, and/or
signals and may communicate commands and/or signals to a base motor
716. In embodiments, a base motor 716 may receive commands and/or
signals, which may result in rotation of a motor shaft. In
embodiments, a motor shaft may be connected, coupled, or indirectly
coupled (through gearing assemblies or other similar assemblies) to
one or more drive assemblies. In embodiments, a drive assembly may
be one or more axles, where one or more axles may be connected to
wheels. In embodiments, for example, a base assembly may receive
commands, instructions and/or signal to rotate in a
counterclockwise direction approximately 15 degrees. In
embodiments, for example, a motor output shaft would rotate one or
more drive assemblies to rotate a base assembly approximately 15
degrees. In embodiments, a base assembly may comprise more than one
motor and/or more than one drive assembly. In this illustrative
embodiment, each of the motors may be controlled independently from
one another, which may result in a wider range of movements and
more complex movements.
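The independently controlled two-motor case above (e.g., rotating a base assembly approximately 15 degrees) can be sketched as a differential in-place turn. The track width and wheel diameter are assumed values for illustration only.

```python
# Sketch of an in-place turn with two independently driven wheels:
# driving the wheels in opposite directions rotates the base assembly.
# Track width and wheel diameter are illustrative assumptions.
import math

def in_place_rotation(angle_deg, track_width_in=20.0, wheel_diameter_in=8.0):
    """Wheel rotations (left, right) for an in-place turn.

    Each wheel travels along a circle of diameter track_width_in; a
    counterclockwise turn drives the right wheel forward and the left
    wheel backward by the same amount.
    """
    arc_in = math.pi * track_width_in * angle_deg / 360.0
    rotations = arc_in / (math.pi * wheel_diameter_in)
    return (-rotations, rotations)

# The approximately-15-degree rotation example above: a small, equal
# and opposite rotation of each wheel.
left, right = in_place_rotation(15)
print(round(left, 3), round(right, 3))
```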
[0095] A computing device may be a server, a computer, a laptop
computer, a mobile computing device, a mobile communications
device, and/or a tablet. A computing device may, for example,
include a desktop computer or a portable device, such as a cellular
telephone, a smart phone, a display pager, a radio frequency (RF)
device, an infrared (IR) device, a Personal Digital Assistant
(PDA), a handheld computer, a tablet computer, a laptop computer, a
set top box, a wearable computer, wearable haptic and touch
communication device, a wearable haptic device, a non-wearable
computing device having a touch-sensitive display, a remote
computing device, a single board computer, and/or an integrated
computing device combining various features, such as features of
the foregoing devices, or the like.
[0096] FIG. 8 illustrates various components of an example
computing device 800 that can be implemented as a mobile computing
device, integrated computing device, server, cloud-based server
and/or remote computing devices described in FIGS. 1-7. These
devices may include some but not all of the components identified
below. The device may be implemented as one or a combination of a
fixed or mobile device, in any form of a consumer, computer,
portable, user, communication, phone, navigation, gaming, audio,
messaging, Web browsing, paging, media playback, and/or other type
of computing device.
[0097] Electronic or computing device 800 includes communication
transceivers 802 that enable wired and/or wireless communication of
device data 804, such as received data over a low power wireless
protocol or an Ethernet wired protocol. Other example communication
transceivers include NFC transceivers, WPAN radios or transceivers
compliant with various IEEE 802.15 (Bluetooth™) standards, WLAN
radios or transceivers compliant with any of the various IEEE
802.11 (WiFi™) standards, WWAN (3GPP, 4G or 5G-compliant) radios
or transceivers for cellular telephony, wireless metropolitan area
network (WMAN) radios or transceivers compliant with various IEEE
802.16 (WiMAX™) standards, and wired local area network (LAN)
Ethernet transceivers.
[0098] Electronic device 800 may also include one or more data
input ports 806 or interfaces via which any type of data, media
content, and/or inputs may be received, such as user-selectable
inputs, messages, signals, instructions, music, television content,
recorded video content, and any other type of audio, video, and/or
image data received from any content and/or data source. Data input
ports or interfaces 806 may include USB ports, coaxial cable ports,
and other serial or parallel connectors (including internal
connectors) for flash memory, SD memory connectors, network
(Ethernet) connectors, DVDs, CDs, and the like. These data input
ports may be used to couple the electronic device to components,
peripherals, or accessories such as keyboards, microphones, flash
drives, external hard drives, and/or cameras.
[0099] Computing device or electronic device 800 of this example
may include one or more processor systems or processors 808 (e.g.,
any of application processors, microprocessors,
digital-signal-processors, controllers, and the like), or a
processor and memory system (e.g., implemented in a SoC), which
process (i.e., execute) computer-executable or computer-readable
instructions to control operation of the device. Processor system
808 (processor(s) 808) may be implemented as an application
processor, embedded controller, single-board computer,
microcontroller, and the like. A processing system may be
implemented at least partially in hardware, which can include
components of an integrated circuit or on-chip system,
digital-signal processor (DSP), application-specific integrated
circuit (ASIC), field-programmable gate array (FPGA), a complex
programmable logic device (CPLD), and other implementations in
silicon and/or other hardware. For example, in various embodiments,
processors 808 may be general-purpose or embedded processors
implementing any of a variety of instruction set architectures
(ISAs), such as the x86, PowerPC, SPARC, Pentium, or MIPS ISAs, or
any other suitable ISA. In multiprocessor systems, each of
processors 808 may commonly, but not necessarily, implement the
same ISA. Alternatively or in addition, the electronic device can
be implemented with any one or combination of software, hardware,
firmware, or fixed logic circuitry that is implemented in
connection with processing and control circuits, which are
generally identified at 810 (processing and control 810). Although
not shown, electronic device or computing device 800 can include a
system bus, crossbar, mesh, mesh network, or data transfer system
that couples the various components within the device. A system bus
can include any one or combination of different bus structures,
such as a memory bus or memory controller, a peripheral bus, a
serial bus, other component sub-architecture, a universal serial
bus, and/or a processor or local bus that utilizes any of a variety
of bus architectures.
[0100] Electronic device or computing device 800 may also include
one or more memory devices 812 that enable data storage, examples
of which include random access memory (RAM), non-volatile memory
(e.g., read-only memory (ROM), flash memory, EPROM, EEPROM, etc.),
and a disk storage device. One or more memory devices 812 may be
configured to store instructions and data accessible by
processor(s) 808. In embodiments, system memory 812 may be
implemented using any suitable memory technology, such as static
random access memory (SRAM), dynamic RAM (DRAM), synchronous
dynamic RAM (SDRAM), nonvolatile/Flash-type memory, or any other
type of memory. One or more memory device(s) 812 provide data
storage mechanisms to store the device data 804, other types of
information and/or data, and various device applications 814 (e.g.,
software applications) (which may be implemented in code or
computer-executable instructions). For example, operating system
software 816 may be maintained as software instructions within one
or more memory devices 812 and executed by processors 808.
[0101] Electronic device 800 may also include audio and/or video
processing system 818 that processes audio data and/or passes
through the audio and video data to audio system 820 and/or to
display system 822 (e.g., monitors, displays, screens, and
wearable computing displays, such as a display on spectacles or on
another wearable computing device) to output content. Audio
system 820 and/or display system 822 may include any devices that
process, display, and/or otherwise render audio, video, display,
and/or image data. Display data and audio signals can be
communicated to an audio component and/or to a display component
via an RF (radio frequency) link, S-video link, HDMI
(high-definition multimedia interface), composite video link,
component video link, DVI (digital video interface), analog audio
connection, or other similar communication link. In embodiments,
audio system 820 and/or display system 822 may be external
components to electronic device or computing device 800 (and/or
may be connected or attached directly to electronic device or
computing device 800). Alternatively or additionally, audio system
820 and/or display system 822 may be an integrated component of the
example electronic device or computing device 800, such as part of
an integrated touch interface (in the case of a display system
822). In embodiments, electronic device or computing device 800 may
further comprise a network interface 850 coupled to I/O interface
or input/output port 806 and/or directly to processor 808.
[0102] In embodiments, I/O ports or interface 806 may be configured
to coordinate I/O traffic between one or more processors 808, one
or more memory devices 812, and any peripheral devices in the
device, including network interface 850 or other peripheral
interfaces. In some embodiments, I/O interface or port 806 may
perform any necessary protocol, timing or other data
transformations to convert data signals from one component (e.g.,
system memory or memory device 812) into a format suitable for use
by another component (e.g., processor 808). In embodiments, I/O
ports or interfaces 806 may include support for devices attached
through various types of peripheral buses, such as a variant of the
Peripheral Component Interconnect (PCI) bus standard, a serial
component interface, or the Universal Serial Bus (USB) standard,
for example. In embodiments, the function of I/O interface 806 may
be split into two or more separate components, for example. Also,
in some embodiments some or all of the functionality of I/O
interface 806, such as an interface to system memory 812, may be
incorporated directly into processor 808.
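A minimal sketch of the kind of data-format transformation that paragraph [0102] attributes to I/O interface 806, assuming, purely for illustration, that a memory device emits a little-endian byte stream and the processor consumes 32-bit integers; the function name and byte order are hypothetical:

```python
# Hypothetical illustration of an I/O interface converting data signals
# from one component's format (raw little-endian bytes from a memory
# device) into a format suitable for another (32-bit integers for a
# processor). The framing and byte order are illustrative assumptions.
import struct


def io_transform(raw: bytes) -> list:
    """Convert a little-endian byte stream into 32-bit signed integers."""
    count = len(raw) // 4  # whole 32-bit words only
    return list(struct.unpack("<%di" % count, raw[: count * 4]))


# A memory device hands back raw bytes; the I/O interface reformats them.
raw = struct.pack("<3i", 1, 2, 3)
values = io_transform(raw)
```

The same shim-like role could cover timing or protocol conversion; the point is only that the interface sits between components and reshapes data in transit.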
[0103] In embodiments, network interface 850 may be configured to
allow data to be exchanged between electronic device or computing
device 800 and other devices 860 attached to a network or networks
855, such as other computer devices, remote computing devices,
servers, cloud-based devices as illustrated in FIGS. 1 through 7,
for example. In embodiments, network interface 850 may support
communication via any suitable wired or wireless general data
networks, such as types of Ethernet network, for example.
Additionally, network interface 850 may support communication via
telecommunications/telephony networks such as analog voice networks
or digital fiber communications networks, via storage area networks
such as Fibre Channel SANs, or via any other suitable type of
network and/or protocol. Further, a computer-accessible medium may
include transmission media or signals such as electrical,
electromagnetic, or digital signals, conveyed via a communication
medium such as a network and/or a wireless link, such as may be
implemented via network interface 850.
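The data exchange that network interface 850 enables between device 800 and remote devices 860 can be sketched as follows; a local socket pair stands in for the network link, and the 4-byte length-prefix framing is an assumption chosen only for illustration:

```python
# Hypothetical sketch of a network interface exchanging data between a
# local device and a remote device. socket.socketpair() stands in for
# the wired or wireless network link; the length-prefix framing is an
# illustrative assumption, not a protocol from the application.
import socket
import struct


def send_message(sock, payload: bytes):
    # Prefix each payload with its length so the receiver knows when to stop.
    sock.sendall(struct.pack("!I", len(payload)) + payload)


def recv_message(sock) -> bytes:
    (length,) = struct.unpack("!I", sock.recv(4))
    data = b""
    while len(data) < length:
        data += sock.recv(length - len(data))
    return data


local, remote = socket.socketpair()  # stand-in for the network
send_message(local, b"sensor-reading:42")
received = recv_message(remote)
local.close()
remote.close()
```

In a real deployment the socket would be opened to a remote server, cloud-based device, or other networked computer device rather than a local pair.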
[0104] Memory, in a computing device and/or a modular umbrella
shading system, interfaces with a computer bus and/or other
communication channels, so as to provide information stored in
memory to processor during execution of software programs such as
an operating system, application programs, device drivers, and
software modules that comprise program code or logic, and/or
computer-executable process steps, incorporating functionality
described herein, e.g., one or more of process flows described
herein. A CPU first loads and/or accesses computer-executable
process or method steps or logic from storage, storage
medium/media, a removable media drive, and/or another storage
device. The CPU can then execute the stored process steps in order
to execute the loaded computer-executable process steps. Stored
data, e.g., data stored by a storage device, can be accessed by the
CPU during the execution of computer-executable process steps.
[0105] Non-volatile storage medium/media is a computer-readable
storage medium(s) that can be used to store software and data,
e.g., an operating system and one or more application programs, in
a computing device or one or more memory devices of an intelligent
umbrella and/or robotic shading system. Persistent storage
medium/media may also be used to store device drivers (such as one or
more of a digital camera driver, motor drivers, speaker drivers,
scanner driver, or other hardware device drivers), web pages,
content files, metadata, playlists, data captured from one or more
assemblies or components (e.g., sensors, cameras, motor assemblies,
microphones, audio and/or video reproduction systems) and other
files. Non-volatile storage medium/media can further include
program modules/program logic in accordance with embodiments
described herein and data files used to implement one or more
embodiments of the present disclosure.
[0106] A computing device or a processor or controller may include
or may execute a variety of operating systems, including a personal
computer operating system, such as Windows, iOS, or Linux, or a
mobile operating system, such as iOS, Android, or Windows Mobile,
Windows Phone, Google Phone, Amazon Phone, or the like. A computing
device, or a processor or controller in an intelligent shading
controller may include or may execute a variety of possible
applications, such as software applications enabling
communication with other devices, such as communicating one or more
messages such as via email, short message service (SMS), or
multimedia message service (MMS), FTP, or other file sharing
programs, including via a network, such as a social network,
including, for example, Facebook, LinkedIn, Twitter, Flickr,
Google+, or Instagram, to provide only a few possible examples.
A computing device or a processor or controller in an intelligent
shading object may also include or execute an application to
communicate content, such as, for example, textual content,
multimedia content, or the like. A computing device or a processor
or controller in an intelligent umbrella or robotic shading system
may also include or execute an application to perform a variety of
possible tasks, such as browsing, searching, playing various forms
of content, including locally stored or streamed content. The
foregoing is provided to illustrate that claimed subject matter is
intended to include a wide range of possible features or
capabilities. A computing device or a processor or controller in an
intelligent shading object and/or mobile computing device may also
include imaging software applications for capturing, processing,
modifying and transmitting image, video and/or sound files
utilizing the optical device (e.g., camera, scanner, optical
reader) within a mobile computing device and/or an intelligent
umbrella or robotic shading system.
[0107] Network link typically provides information communication
using transmission media through one or more networks to other
devices that use or process the information. For example, network
link may provide a connection through a network (LAN, WAN,
Internet, packet-based or circuit-switched network) to a server,
which may be operated by a third party housing and/or hosting
service. For example, the server may be the server described in
detail above. The server hosts a process that provides services in
response to information received over the network, for example,
like application, database or storage services. It is contemplated
that the components of system can be deployed in various
configurations within other computer systems, e.g., host and
server.
[0108] For the purposes of this disclosure a computer readable
medium stores computer data, which data can include computer
program code that is executable by a computer, in machine-readable
form. By way of example, and not limitation, a computer-readable
medium may comprise computer readable storage media, for tangible
or fixed storage of data, or communication media for transient
interpretation of code-containing signals. Computer readable
storage media, as used herein, refers to physical or tangible
storage (as opposed to signals) and includes without limitation
volatile and non-volatile, removable and non-removable media
implemented in any method or technology for the tangible storage of
information such as computer-readable instructions, data
structures, program modules or other data. Computer readable
storage media includes, but is not limited to, DRAM, DDRAM, RAM,
ROM, EPROM, EEPROM, flash memory or other solid state memory
technology, CD-ROM, DVD, or other optical storage, magnetic
cassettes, magnetic tape, magnetic disk storage or other magnetic
storage devices, or any other physical or material medium which can
be used to tangibly store the desired information or data or
instructions and which can be accessed by a computer or
processor.
[0109] For the purposes of this disclosure a system or module is a
software, hardware, or firmware (or combinations thereof), process
or functionality, or component thereof, that performs or
facilitates the processes, features, and/or functions described
herein (with or without human interaction or augmentation). A
module can include sub-modules. Software components of a module may
be stored on a computer readable medium. Modules may be integral to
one or more servers, or be loaded and executed by one or more
servers. One or more modules may be grouped into an engine or an
application.
[0110] Those skilled in the art will recognize that the methods and
systems of the present disclosure may be implemented in many
manners and as such are not to be limited by the foregoing
exemplary embodiments and examples. In other words, functional
elements may be performed by single or multiple components, in
various combinations of hardware, software, or firmware, and
individual functions may be distributed among software
applications at either the client or server or both. In this
regard, any number of the features of the different embodiments
described herein may be combined into single or multiple
embodiments, and alternate embodiments having fewer than, or more
than, all of the features described herein are possible.
Functionality may also be, in whole or in part, distributed among
multiple components, in manners now known or to become known. Thus,
myriad software/hardware/firmware combinations are possible in
achieving the functions, features, interfaces and preferences
described herein. Moreover, the scope of the present disclosure
covers conventionally known manners for carrying out the described
features and functions and interfaces, as well as those variations
and modifications that may be made to the hardware or software or
firmware components described herein as would be understood by
those skilled in the art now and hereafter.
[0111] While certain exemplary techniques have been described and
shown herein using various methods and systems, it should be
understood by those skilled in the art that various other
modifications may be made, and equivalents may be substituted,
without departing from claimed subject matter. Additionally, many
modifications may be made to adapt a particular situation to the
teachings of claimed subject matter without departing from the
central concept described herein. Therefore, it is intended that
claimed subject matter not be limited to the particular examples
disclosed, but that such claimed subject matter may also include
all implementations falling within the scope of the appended
claims, and equivalents thereof.
* * * * *