U.S. patent application number 14/583655 was published by the patent office on 2015-05-07 as publication number 20150128064 for intelligent rendering of information in a limited display environment.
The applicant listed for this patent is Seven Networks, Inc. Invention is credited to Michael Fleming.
Application Number: 14/583655
Publication Number: 20150128064
Document ID: /
Family ID: 42307179
Publication Date: 2015-05-07

United States Patent Application 20150128064
Kind Code: A1
Fleming; Michael
May 7, 2015

INTELLIGENT RENDERING OF INFORMATION IN A LIMITED DISPLAY ENVIRONMENT
Abstract
A method and related system are provided. The method includes
processing one or more inputs by a computing device that are
received from one or more input sources to determine a command that
corresponds to the one or more inputs and exposing the command to
one or more controls that are implemented as software that is
executed on the computing device and that have subscribed to the
command.
Inventors: Fleming; Michael (San Leandro, CA)

Applicant: Seven Networks, Inc., San Carlos, CA, US

Family ID: 42307179
Appl. No.: 14/583655
Filed: December 27, 2014
Related U.S. Patent Documents

Application Number | Filing Date  | Patent Number | Parent of
12970452           | Dec 16, 2010 |               | 14583655
11227013           | Sep 14, 2005 | 7877703       | 12970452
60661757           | Mar 14, 2005 |               |
Current U.S. Class: 715/744
Current CPC Class: G09G 2340/145 (2013.01); G06F 3/0484 (2013.01); G06T 11/20 (2013.01); G06F 9/541 (2013.01); G06F 3/0482 (2013.01); G09G 5/14 (2013.01); G06F 9/542 (2013.01)
Class at Publication: 715/744
International Class: G06F 3/0484 (2006.01)
Claims
1. A method comprising: processing one or more inputs by a
computing device that are received from one or more input sources
to determine a command that corresponds to the one or more inputs;
and exposing the command to one or more controls that are
implemented as software that is executed on the computing device
and that have subscribed to the command.
2. A method as described in claim 1, wherein the processing is
configured to be performed for a plurality of different types of
input sources.
3. A method as described in claim 2, wherein the exposing of the
command is performed such that the command is not indicative of the
type of input source used to provide the command.
4. A method as described in claim 1, wherein the processing
includes processing an output of a translation module that is
configured to translate source-specific information of a
corresponding said input source to an application-readable
format.
5. A method as described in claim 1, wherein the processing
includes normalization of the one or more inputs to produce a
lower-bandwidth representation of the one or more inputs.
6. A method as described in claim 1, wherein the processing
includes conversion of input-specific data into the command such
that the command includes command-specific data that is
semantically relevant to the command.
7. A method as described in claim 1, wherein: the processing
includes a determination of whether to invoke the command based on
the one or more input sources and a definition of the command; and
the exposing is performed responsive to the determination that the
command is to be invoked.
8. A method as described in claim 7, wherein the determination is
based on a threshold included in the definition of the command or
upon successful recognition of the one or more inputs.
9. A method as described in claim 1, wherein the exposing is
performed via message passing, event, or setting a state that is
polled by the software that implements the one or more controls on
the computing device.
10. A system comprising: an adaptation module implemented at least
partially in hardware of a computing device to convert one or more
inputs received from one or more input sources into one or more
corresponding commands; and a notification module implemented at
least partially in hardware of the computing device to notify one
or more controls of the computing device of the one or more
commands.
11. A system as described in claim 10, further comprising a
translation module implemented at least partially in hardware of
the computing device to translate data from the one or more input
sources from source-specific information into a format that is
understandable by the adaptation module.
12. A system as described in claim 10, wherein the adaptation
module is configured to process inputs from a plurality of
different types of input sources into one or more corresponding
commands that are not indicative of the type of input sources used
to provide the one or more inputs.
13. A method comprising: processing a first input by a computing
device that is received from a first input source to determine a
command that corresponds to the first input; responsive to the
processing of the first input, exposing the command to one or more
controls that are implemented as software that is executed on the
computing device; processing a second input by a computing device
that is received from a second input source to determine that the
command corresponds to the second input, the second input source of
a type that is different than the first input source; and
responsive to the processing of the second input, exposing the
command to the one or more controls.
14. A method as described in claim 13, wherein at least one of the
first or second inputs is input via a gesture.
15. A method as described in claim 14, wherein the other of the
first or second inputs is not input via a gesture.
16. A method as described in claim 13, wherein the exposing of the
first and second commands is performed without indicating a
respective said type of the first and second input sources.
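For illustration only, the flow recited in claims 1 and 10 can be pictured as the following minimal Java sketch, in which an adaptation step maps source-specific inputs to a source-neutral command and a notification step exposes that command to whichever controls have subscribed to it. Every class, method, and command name here is a hypothetical placeholder, not language from the application itself.

    import java.util.*;

    // A source-neutral command; it deliberately carries no indication of
    // the type of input source that produced it (cf. claims 3 and 12).
    enum Command { SCROLL_DOWN, SELECT, BACK }

    // A control implemented as software executed on the computing device.
    interface Control {
        void onCommand(Command c);  // invoked when a subscribed command fires
    }

    // Notification module: exposes a command to the subscribed controls.
    class CommandBus {
        private final Map<Command, List<Control>> subscribers = new HashMap<>();

        void subscribe(Command c, Control ctl) {
            subscribers.computeIfAbsent(c, k -> new ArrayList<>()).add(ctl);
        }

        void expose(Command c) {
            for (Control ctl : subscribers.getOrDefault(c, List.of())) {
                ctl.onCommand(c);
            }
        }
    }

    // Adaptation module: processes raw inputs to determine the corresponding
    // command, if any; unrecognized inputs yield no command (cf. claims 7-8).
    class AdaptationModule {
        Optional<Command> adapt(String sourceType, String rawInput) {
            if ("gesture".equals(sourceType) && "swipe-up".equals(rawInput)) {
                return Optional.of(Command.SCROLL_DOWN);
            }
            if ("keypad".equals(sourceType) && "down-arrow".equals(rawInput)) {
                return Optional.of(Command.SCROLL_DOWN); // different source type, same command
            }
            return Optional.empty();
        }
    }

Note that the gesture input and the keypad input both resolve to the same SCROLL_DOWN command, which mirrors claim 13's recitation of two differently typed input sources driving the same controls.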
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority to U.S. patent application
Ser. No. 12/970,452 filed on Dec. 16, 2010 entitled "Intelligent
Rendering of Information in a Limited Display Environment", which
claims priority to U.S. patent application Ser. No. 11/227,013
filed on Sep. 14, 2005 entitled "Intelligent Rendering of Information
in a Limited Display Environment", now U.S. Pat. No. 7,877,703,
which claims priority to U.S. Provisional Patent Application No.
60/661,757 filed on Mar. 14, 2005 entitled "Agnostic User Interface
for Use in Mobile Devices." The entire disclosure of each of these
applications is incorporated by reference herein.
[0002] This application is related to U.S. patent application Ser.
No. 11/123,540 filed on May 5, 2005 entitled "Universal
Text-Entry." This application is related to U.S. patent application
Ser. No. 11/227,323 filed on Sep. 14, 2005 entitled "Cross Platform
Event Engine." This application is further related to U.S. patent
application Ser. No. 11/227,272 filed on Sep. 14, 2005 entitled
"Platform Neutral User Interface for Mobile Devices." The entire
disclosure of each of these applications is incorporated by
reference herein.
FIELD OF THE INVENTION
[0003] The present invention generally relates to the field of user
interfaces. More specifically, the present invention relates to the
use of user interfaces across various operating platforms in
various mobile devices.
DESCRIPTION OF THE RELATED ART
[0004] Mobile data access devices make it simple and affordable to
access corporate and personal data while out of the office.
Software allowing for such access is becoming a standard feature on
a variety of mobile devices and platforms: BREW, Pocket PCs,
Smartphones, Symbian-based phones, PDAs and Internet browsers.
[0005] There are approximately 35 million workers that make up the
`mobile workforce,` that is, individuals who carry out all or
substantial portions of their job away from a physical office
setting. Despite the increasing number of on-the-go workers,
electronic mail remains, arguably, the most important business
application. As a result, this workforce and the casual individual
user have an inherent need for wireless access to their electronic
mail and other data.
[0006] Despite the pervasiveness of electronic mail and an
ever-increasing need for access to electronic mail and data, the
cost of ownership for mobile data access remains a barrier. The issue is
no longer whether mobile data access is a necessity but whether it
can be deployed and managed in an effective manner.
[0007] While cost is an obvious concern in equipping the workforce
with the means for accessing data on-the-go, the implementation,
development, integration and management of mobile data access
solutions are also a key concern. And while mobile devices are
becoming a staple in personal and commercial enterprise, other
rapidly evolving changes such as number portability, mergers in the
telecom industry and the lack of any one particular technical
standard in the mobile device technological space, make providing
support for a wide array of devices as important an issue as any
with regard to accessing data from a mobile device. The lack of
internal expertise, the immaturity of standards, complexity of
integration, device limitations, and application development have
all been explicitly recognized as barriers to adopting mobile
devices for providing access to data while, for example, out of the
office or away from a personal desktop computer.
[0008] Increased device flexibility, as may be provided by
device-agnostic software, allows for consolidation of multiple application
utilities and also reduces overall outlays on hardware (e.g., a
single application can be run on various mobile devices, as can a
single piece of hardware, such as a synchronization cable). This
flexibility also builds familiarity and expertise among IT staff and,
likewise, end users, which better ensures adoption of mobile
device technologies in their fullest sense, thereby better ensuring
a return on investment.
[0009] As adoption and pervasiveness of mobile devices and
operating platforms increase, developing agnostic applications for
mobile devices makes application development and testing less of a
colossal task for software engineers, quality assurance
professionals and human factors engineers. The result is better
design and quality assurance.
[0010] User interfaces play a critical role in mobile device
development in that they must not only provide users with access to
mission critical data but deal with the realities of variations in
screen size, pixel density, aspect ratio and screen use
availability; limited memory on a mobile device client; limited
processing power; general quirkiness between platforms; and,
perhaps most noticeable to the end-user, the limited physical space
for interfacing with the mobile device. A keyboard, mouse or even a
stylus is normally not available on a traditional wireless or
mobile device. Not only is input difficult; so is viewing
information rendered on a display. This is especially true when the mobile
device happens to also be a cellular telephone.
[0011] One solution has been to utilize XML instead of HTML for
pushing content. Using Extensible Stylesheet Language (XSL) allows
for XML-formatted content to be transformed into HTML or other
formats as a particular mobile device might require.
[0012] Nevertheless, engineers must still deal with the fact that
one interface will, often, not be suitable for more than one
primary set of devices. For example, PDAs utilize a stylus and
touch-screen while WAP-compliant mechanisms support telephony
through, for example, Wireless Telephony Applications (WTA).
[0013] Even if a designer is satisfied with limiting an interface
to a particular device, the engineer must still deal with the
nuances of particular device manufacturers (e.g., a Palm PDA versus
a Nokia cell phone) and, in some instances, particular device
models (e.g., Palm VIIx versus Nokia 7110).
[0014] An engineer is still, in many instances, limited by the fact
that they must pre-generate static content and generalize possible
permutations of the interface as they pertain to a particular
device family. This results in delays in the delivery of
applications and increased costs in research and development, which
inevitably result in increased costs for the end user.
[0015] There is, therefore, a need in the art for a user interface
that is agnostic with regard to operating platform and device
wherein one client will work on multiple platforms and devices.
[0016] It should be noted, in the course of this disclosure, that
while a device and platform are recognized as distinct--albeit
related--entities, any reference to a device or a platform should
be considered inclusive of both. Similarly, any reference to the
agnosticism of an interface should also be interpreted as
agnosticism for both a device and a platform.
[0017] Further, it should be noted that the disclosed agnostic user
interface is not dependent on the presentation or transmission of
communications data (e.g., electronic mail, calendar, SMS) or
utilization of user data (e.g., data stored on a desktop).
SUMMARY OF THE INVENTION
[0018] The present invention provides an advantageous virtual
platform that is agnostic to physical device or operating platform
and is comprised of an abstraction layer that allows for portability
across any variety of mobile devices or operating platforms,
especially with regard to user interfaces. The
virtual platform and its abstraction layer allow for a user
interface on a first device to appear identical on a second device
regardless of differences or limitations that may exist between
operating system platforms or devices. By providing an agnostic
user interface application, a user can move effortlessly between
devices should, for example, the need for replacement or repair of
a particular device arise.
[0019] Additionally, the agnosticism of the interface application
makes it possible for software developers and engineers to utilize
one test suite for a variety of devices or platforms when
introducing new features, thereby reducing lag-time in getting
applications to market as well as R&D costs, which inevitably
translates into savings for the end-user and/or profit increases
for the application and/or device developer/manufacturer.
[0020] The present invention also provides an advantageous means of
highlighting or focusing information on a device to minimize the
display of unnecessary or interfering information relative to
presently important or critical data to be observed by a user.
[0021] The present invention also provides advantageous
intelligence with regard to the display of information on an
as-possible, as-needed and/or as-preferred basis.
[0022] The present invention also provides an advantageous layout
engine wherein non-compatible graphics and/or text to be displayed
on a particular device can be dynamically altered prior to
rendering so that they are rendered without significant layout
errors or disruptions in the user's viewing of the information.
Methods for configuring the layout of information are also
provided.
[0023] The present invention also provides advantageous means of
arranging information on a mobile device in conjunction with a
layout engine through the use of coordinate positioning of
information and/or vector drawing.
[0024] The present invention also provides an advantageous
cross-platform events engine for synthesis of a variety of events
and uniformly acting on disparate event sets wherein an event
request as might be recognized on one device is translated into a
native request recognized on a second device through abstraction
and code sharing. Methods for determining the portability of an
event from a first device to a second device are also provided.
[0025] The present invention also provides an advantageous means of
driving text-entry mechanisms whereby difficulties with adjusting
to device-specific constraints, such as the timeout periods associated
with triple-tap text entry or pictographic language entry, are overcome
through the use of an off-screen text buffer. Methods for the entry and
display of text are also provided.
[0026] The present invention also provides an advantageous means of
managing information on a mobile device utilizing five-way
navigation.
BRIEF DESCRIPTION OF THE DRAWINGS
[0027] FIG. 1A illustrates an exemplary embodiment of a device
platform including various operational layers for interaction with
a particular device client.
[0028] FIG. 1B illustrates a device platform including various
operational layers for interaction with a particular device client
as is generally found in the prior art.
[0029] FIG. 2A illustrates a balance of platform specific code and
platform interchangeable code as may generally be found in the
prior art.
[0030] FIG. 2B illustrates an exemplary embodiment of an
abstraction layer and a balance of platform specific code and
platform agnostic code as may be found in an agnostic
interface.
[0031] FIG. 3 illustrates an exemplary embodiment of an abstract
layer comprised of various informational modules.
[0032] FIG. 4 illustrates an exemplary embodiment of a virtual
platform comprised of a shell program and an abstract layer.
[0033] FIG. 5A illustrates the differences in screen display ratio
for two different client devices as found in the prior art.
[0034] FIG. 5B illustrates the problems often associated with a
single graphic element rendered on different client devices with
different display ratios as found in the prior art.
[0035] FIG. 6A illustrates exemplary relative adjustments in an
agnostic user interface.
[0036] FIG. 6B illustrates exemplary dynamic adjustments in an
agnostic user interface as they pertain to a global scaling
feature.
[0037] FIG. 6C illustrates exemplary dynamic adjustments in an
agnostic user interface as they pertain to a zooming feature.
[0038] FIG. 6D illustrates exemplary dynamic adjustments as they
pertain to a `quick-look` or `short attention span` feature in an
agnostic user interface.
[0039] FIG. 7A illustrates a layout engine for controlling an
agnostic user interface.
[0040] FIG. 7B illustrates an exemplary relationship between an
abstraction layer and a rules engine as might be present in a
layout engine in one embodiment of the present invention.
[0041] FIG. 8 illustrates the exemplary rendering of a graphic
image through the use of a coordinate layout system.
[0042] FIG. 9 illustrates the exemplary rendering of graphic
information according to hierarchical limitations and
requirements.
[0043] FIG. 10A illustrates a menu with available and not-available
options as is known in the prior art.
[0044] FIG. 10B illustrates a menu exhibiting intelligent
prioritization of menu commands as governed by their present
availability according to an exemplary embodiment of the present
invention.
[0045] FIG. 10C illustrates a menu exhibiting intelligent
prioritization of menu commands as governed by presently available
and user preferred commands according to an exemplary embodiment of
the present invention.
[0046] FIG. 11A illustrates icons on a display with no particular
limitations as to their rendering.
[0047] FIG. 11B illustrates icons on a display with display
limitations wherein the icons are intelligently selected in an
exemplary embodiment of the present invention.
[0048] FIG. 12 illustrates a cross-platform event engine as may be
utilized in an exemplary embodiment of the present invention.
[0049] FIG. 13A illustrates a portion of a keypad as might be
utilized in triple-tap or triple-press text entry on a mobile
device as is known in the prior art.
[0050] FIG. 13B illustrates a mobile device utilizing the T9
text-entry methodology as is known in the prior art.
[0051] FIG. 14A illustrates an exemplary embodiment of the present
invention wherein an on-screen text box is synchronized with an
off-screen text buffer.
[0052] FIG. 14B illustrates a string of text as may be found in an
off-screen text buffer in an exemplary embodiment of the present
invention.
[0053] FIG. 15 illustrates an exemplary method for utilizing an
off-screen text buffer in an embodiment of the present
invention.
[0054] FIG. 16 illustrates an exemplary method for utilizing a
layout engine to display graphics and/or text in an embodiment of
the present invention.
[0055] FIG. 17 illustrates an exemplary method for utilizing a
cross-platform events engine to execute cross-platform events in a
native environment in an exemplary embodiment of the present
invention.
[0056] FIG. 18A illustrates the display of information on a mobile
device as may be found in the prior art.
[0057] FIG. 18B illustrates the exemplary management of information
displayed in FIG. 18A using five-way navigation in an embodiment of
the present invention.
DETAILED DESCRIPTION OF AN EXEMPLARY EMBODIMENT
[0058] FIG. 1A illustrates an exemplary embodiment of a device
platform including various operational layers for interaction with
a particular client device. The present embodiment comprises a
platform 110, abstraction layers 120, synchronization module 130,
user interface framework 140, and client device 150.
[0059] Some embodiments of the present invention may comprise
additional operational layers such as open or proprietary
application program interfaces (APIs) that allow software
engineers, programmers, and other users to author or install
applications that are consistent with the particular platform's
operating environment. Some embodiments of the present invention
may also lack certain operational layers, such as the
synchronization layer, should a particular device or platform not
provide for synchronization operations.
[0060] The platform 110 is the underlying hardware or software for
a particular operating environment. Platform 110 also defines a
standard around which the particular operating environment is
developed, that is, around which software, hardware and other
applications can be developed. An example of the platform 110 is
the Nokia Series 40 Developer Platform, which can utilize platform
technologies such as Java™ J2ME, or the Nokia Series 60 and
Series 80 Developer Platforms, which can utilize C++ in addition to
Java™ J2ME platform technologies. Similarly, the Palm OS®
Platform supports native programming in the C and C++ languages as well
as supporting Java programming via third-party Java Virtual
Machines.
[0061] Abstraction layers 120 are composed of basic functionalities
that allow for, in part, the integration of platform 110 with
client device 150 as well as other operational layers such as
synchronization module 130 and user interface framework 140. The
abstraction layers 120 also declare classes, interfaces, and
abstract methods intended to support various functions and system
operations in any particular platform 110. Abstraction layers 120
may be open or proprietary and are often composed of various
modules (e.g., FIG. 3).
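By way of a hypothetical sketch in Java (a language the platforms discussed herein support), the declaration of classes, interfaces, and abstract methods by an abstraction layer might look as follows; none of these names come from the application:

    // Hypothetical abstraction-layer contract: each platform 110 supplies
    // its own implementation, while higher layers such as the user
    // interface framework 140 are written only against these declarations
    // and can therefore move between platforms unchanged.
    interface DisplaySurface {
        int widthPixels();
        int heightPixels();
        void drawLine(int x1, int y1, int x2, int y2);
    }

    abstract class AbstractionLayer {
        // Abstract methods each platform port must fill in.
        abstract DisplaySurface display();
        abstract void playSound(String resourceId);
        abstract void postEvent(Object event);

        // Shared, platform-agnostic behavior can live here once.
        final void beep() { playSound("system.beep"); }
    }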
[0062] Optional synchronization module 130 comprises the various
operational instructions, functionalities, and code necessary to
allow a particular client device 150 to synchronize with an
external device, such as a desktop personal computer or enterprise
server. Synchronization can be achieved in a variety of ways
including a cable-to-handset synchronization mechanism whereby the
client device 150 is physically coupled to a desktop personal
computer to allow for the exchange and synchronization of data
(e.g., electronic mail). Synchronization and optional
synchronization module 130 are not to be construed as necessary for
the operation of an agnostic user interface.
[0063] Synchronization can also be achieved wirelessly whereby an
enterprise server (e.g., a Microsoft Exchange Server) configured
with appropriate software (e.g., SEVEN Server Edition from SEVEN
Networks, Inc.) and with access to a wireless gateway allows for
real-time access to electronic mail and other data by the client
device 150 without any physical connection to the enterprise
server. While the synchronization module 130 may be necessary for
synchronizing the client device 150 and another external device
(e.g., a server), the presence of such a module is not meant to be
interpreted as a prerequisite for the operation of an agnostic user
interface.
[0064] User interface framework 140 comprises various libraries and
source code to allow for the rendering of a user interface on a
particular client device 150. User interface framework 140
libraries include elements such as icons, cursors, scroll bars,
sounds, animations, etc.
[0065] Client device 150 is any device coupled to a network (e.g.,
wirelessly) that allows for access to a server device or other
computing entity, such as a second client device. Through the
coupling of the client device 150 to the server, the user of the
client device 150 can synchronize data such as electronic mail or
access data. Examples of client device 150 include Pocket PCs,
Smartphones, and PDAs. Client devices 150 are increasingly mobile.
This mobility is often a direct result of integrating the client
device 150 with, for example, a cellular telephone although it is
not necessary for the client device 150 to be integrated with a
mobile phone or any other device. Client devices 150 are often
associated with a particular platform 110.
[0066] For example, the aforementioned Nokia Series 40 Developer
Platform is associated with the Nokia 6101 and 6102 model client
devices as well as the Nokia 6020, 6235, 6235i and 6822 model
client devices. The Nokia Series 60 Developer Platform, on the
other hand, is associated with client devices such as the Nokia
6680, 6681, and 6682 model devices. Similarly, the Palm OS®
Platform is associated with client devices such as the Xplore™ G18,
the Kyocera 7135, and the Treo™ 650.
[0067] FIG. 1B illustrates a device platform including various
operational layers for interaction with a particular device client
as is generally found in the prior art.
[0068] A device platform found in the prior art shares limited
similarities with a device platform as might be found in an
embodiment of the present invention in that a prior art device
platform comprises the actual platform and various operational
layers such as synchronization modules, APIs, and so forth. Prior
art device platforms differ from a platform as might be found in an
embodiment of the present invention in that the client, user
interface framework and abstraction layer are more integrated and
operationally incorporated (160) as compared to the present
invention (170). The `tightly wound` nature of prior art devices is
often the result of a general lack of portability of a user
interface or any other aspect of the particular platform between
various client devices. That is, a particular user interface and
accompanying abstraction layer are written exclusively for a
particular platform and exclusively for a particular device solely
in conjunction with that platform.
[0069] The exemplary device platform illustrated in FIG. 1A
evidences the ability to transport various aspects of a particular
platform (e.g., a user interface) from one client device 150 to the
next, especially with regard to the design of the abstraction layer
120 as is further discussed in the context of FIGS. 2A and 2B,
below.
[0070] It should be noted that while FIG. 1A illustrates various
operational layers as separate elements, this is not to suggest a
necessary physical differentiation or a general lack of integration
in an embodiment. Similarly, the integration of the client, user
interface framework and abstraction layer (160) in FIG. 1B is not
meant to suggest a literal, physical integration. These
illustrations are provided merely to aid in the perception of the
`tightly wound` and vertically integrated aspects of a prior art
device platform versus a device platform, as in an embodiment of
the present invention, allowing for portability of, for example, a
user interface from one device to another.
[0071] FIG. 2A illustrates a balance of platform specific code 210
and platform agnostic code 220 as may generally be found in the
prior art.
[0072] For example, and as described in the context of FIG. 1B,
prior art platform devices are unitary in nature and not meant to
allow for portability of features, such as a user interface. As
such, a prior art abstraction layer 200 is comprised predominantly
of platform-specific and device-specific code 210. This
particularized code, while allowing for the integration and
operation of a particular device on a particular platform, inhibits
the portability of any particular features from one device to
another (e.g., a user interface) as might otherwise be provided for
with more general or agnostic code 220. Agnostic code 220 might
comprise code written in accordance with particular industry
standards or specifications but that allows for the portability or
interoperability of a specific and particular feature amongst
devices.
[0073] FIG. 2B illustrates an exemplary embodiment of an
abstraction layer 250 and a blend of platform specific code 260 and
agnostic code 270 as might be found in an agnostic user
interface.
[0074] An abstraction layer 250, as may be found in an embodiment
of the present invention and as illustrated in FIG. 2B, exhibits a
much `thinner` layer of platform- or device-specific code 260.
Abstraction layer 250 with its thin layer of platform- or
device-specific code may be, generally, the type of abstraction
layer 120 as described in FIG. 1A. As the abstraction layer 250 is
comprised more of standardized or agnostic code 270, the
portability or interoperability of particular features is increased
in that a feature will operate on various platforms or devices due
to its coding being dependent more on the generalized code 270 than
on platform- or device-specific code 260 that limits or inhibits
portability or interoperability.
[0075] FIG. 3 illustrates an exemplary embodiment of an abstract
layer 310 comprised of various informational modules 320-350.
[0076] Informational modules 320-350 are routines and instructions
as they pertain to various operational features of a particular
platform 110 and/or client device 150 linked in the abstraction
layer 310. For example, resource module 320 may comprise specific
data or routines utilized in the operation of platform 110 and/or
device 150 (e.g., sleep mode, power on and off). Graphics module
330 may comprise specific files such as JPEGs, bitmaps or other
graphic data that could be utilized by user interface framework 140
in its rendering of a user interface on client device 150. Event
module 340 may comprise a library of actions or occurrences as
might be detected by a particular program such as user actions
(e.g., pressing a key) in addition to system occurrences (e.g., an
internal calendar alarm). Sound module 350 may comprise various
sounds (e.g., WAV files) to be generated in response to, for
example, the occurrence of certain system events (e.g., system
warnings concerning low battery power).
[0077] Abstract layer 310, as it corresponds to abstraction layer 120
(FIG. 1) and abstraction layer 250 (FIG. 2), may comprise more or
fewer modules as required by the particular platform 110 and/or
client device 150. It should also be noted that while FIG. 3
illustrates various modules as separate elements, this is not to
suggest the requirement of a physical differentiation or a general
lack of integration in an embodiment of the present invention.
[0078] FIG. 4 illustrates an exemplary embodiment of a virtual
platform 400 comprised of a shell program 410 and an abstract layer
420.
[0079] Abstract layer 420 is a layer similar to that described in
FIG. 3. Abstract layer 420 interacts with a shell program 410 to
effectively translate or otherwise offer portability of commands or
instructions from one platform or device to a second platform or
device. For example, if an event 430 (e.g., an extended button
press) occurs on a particular platform (e.g., the Nokia Series 40
Developer Platform), that event 430 may not be immediately
recognized on a Palm OS® platform; the virtual platform 400
provides the necessary translation between the two.
[0080] The event 430 or certain information generated by the event
430 is intercepted by the shell program 410. The shell program 410
prevents the event 430 or the information generated by the event
430 from being immediately processed by any relevant logic on the
device or platform. The abstract layer 420 then processes the event
430 intercepted by the intermediary shell program 410 and
determines the proper response 440 to the event 430 for the
particular platform hosting the virtual platform 400.
[0081] For example, the extended button press on the Nokia Series
40 Developer Platform, in that particular operating environment,
might be equated to activating a backlight for a display screen. In
another operating environment, however, an extended button press
might be associated with sending a device into a `sleep` state or
may lack an associated function altogether. In this instance, and
absent the virtual platform 400, if a Nokia platform were operating
on a Treo™ 650 device, an extended button press by the user that
was meant to activate the backlight could result in sending the
device into hibernation or even a system crash for lack of an
associated command string.
[0082] Utilizing the virtual platform 400, however, the shell
program 410 would intercept and recognize the Nokia platform button
press event 430 and communicate with the abstract layer 420 in
order to translate the event 430 into the proper related response
440 for the Treo™ device, which might normally be a double press
of a particular button.
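As a rough sketch of this intercept-and-translate flow, and assuming hypothetical event names that are not in the application, the virtual platform 400 might be coded along these lines:

    import java.util.Map;

    // Hypothetical sketch of the virtual platform 400: the shell program
    // intercepts a native event before device logic sees it, and the
    // abstract layer maps it to the proper response for the host platform.
    class VirtualPlatform {
        // e.g., a Nokia-style extended button press means "backlight on",
        // which on a hypothetical Treo-style host is a double button press.
        private static final Map<String, String> TRANSLATION = Map.of(
            "EXTENDED_BUTTON_PRESS", "DOUBLE_BUTTON_PRESS"
        );

        // Shell program 410: intercepts the event so the host platform's
        // own logic never processes the foreign event directly.
        String intercept(String foreignEvent) {
            return abstractLayerTranslate(foreignEvent);
        }

        // Abstract layer 420: determines the proper response 440 for the
        // platform hosting the virtual platform. Unknown events become a
        // no-op rather than passing through, avoiding the crash scenario
        // of paragraph [0081].
        private String abstractLayerTranslate(String event) {
            return TRANSLATION.getOrDefault(event, "NO_OP");
        }
    }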
[0083] FIG. 5A illustrates the differences in screen display ratio
for two prior art client devices, specifically a Treo™ 650 510
and a Nokia 6680 520. In the case of the Treo™ 650 client
device, the screen display offers 320×320 pixels with
16-bit color; the display offers approximately 65,000 colors. In
the case of the Nokia 6680 client device, the screen display offers
176×208 pixels with an active matrix; the display offers
approximately 262,144 colors.
[0084] FIG. 5B illustrates the problems often associated with a
single graphic element rendered on different client devices with
different display ratios as found in the prior art. For example, a
graphic 530 might be approximately 300 pixels in width and render
without issue on device 510 with a 320-pixel width. That same
graphic, in the context of device 520 with a 176-pixel width,
however, might be distorted 540 in that it is `cut off` due to the
limited screen width. This distortion is often the result of
different devices and/or platforms rendering the same graphic. This
distortion can be especially problematic in the context of user
interfaces offered by third-party software service providers for
functionality and/or branding purposes.
[0085] The agnostic user interface as described herein aids in
preventing, inter alia, inevitable pixel variances and other
differences between devices and/or platforms from resulting in the
distorted image 540 as shown in device 520 in FIG. 5B. The agnostic
user interface will specify a particular layout but also provide
for adjustment of the interface depending on the particular nuances
of any particular platform or device, for example, screen width as
evidenced in FIGS. 5A and 5B. These adjustments can be relative
(e.g., as a result of screen width) or `as needed` or `dynamic` per
the particular demands of a user of any particular device.
[0086] An example of relative adjustments in a client device is
illustrated in FIG. 6A. Client device 605 is similar in size to
client device 510 in FIGS. 5A and 5B (320×320). Graphic 610
is rendered on client device 605 in a size that is relative to the
pixel limitations of the screen. Graphic 620 is similarly rendered
on client device 615, which is similar in size to client device 520
in FIGS. 5A and 5B (208×176). Instead of graphic 620
appearing distorted as it did in FIG. 5B (540), the platform
agnostic interface has provided a generally identical screen layout
but made automatic adjustments for the graphic 620 to appear
relative to the constraints of the client device 615.
[0087] FIG. 6B illustrates dynamic adjustments in a user interface
as they pertain to a global scaling feature. In some instances, a
particular device will be unable to allow for relative adjustment
of a user interface. This might be a result of screen size
limitations or the inability to render certain graphics. In these
instances the agnostic user interface can make intelligent
decisions with regard to what information should be relatively
adjusted, which information cannot be relatively adjusted (for
varying reasons, for example, the critical importance of certain
information), and certain information which should be dropped from
the display altogether.
[0088] As shown in device 625, a display screen is shown with
certain user interface information 635 such as a tool bar and
various short-cut keys such as phone, home, contacts, trash, notes,
and electronic mail. In a device 630 with limited screen size,
relative adjustments to all this information might make the
short-cut key and tool bar entirely illegible due to excessive
decreases in size and/or overcrowding on the display. In these
instances, the agnostic user interface will make intelligent
decisions with regard to what information must remain present and
the limits on certain relative adjustments of information.
[0089] For example, in device 630 with a platform agnostic user
interface, user interface information 640 has been adjusted to
address the limitations of the screen size. Specifically, certain
short-cut keys (electronic mail, home, contacts, and phone) have
been entirely removed from the display. While these functionalities
remain present in the device, their short-cut keys have merely been
removed from the screen and now require a button-press or access
through a tree-menu or some other means of access, as might be
dependent on the exact structure of the user interface.
Additionally, while some short-cut keys have been reduced in size,
others remain more prominent. This can be a result of default
settings that identify certain features as being more mission
critical than others or as a result of specific user settings.
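One way such intelligent selection could be realized is a simple priority budget, sketched below in Java; the priority field, pixel budget, and all names are assumptions made for illustration, not details from the application:

    import java.util.*;

    // Hypothetical sketch: given a screen budget, keep the
    // highest-priority short-cut keys and drop what no longer fits.
    record ShortcutKey(String name, int priority, int widthPx) {}

    class ScalingPolicy {
        List<ShortcutKey> layOut(List<ShortcutKey> keys, int screenWidthPx) {
            List<ShortcutKey> byPriority = new ArrayList<>(keys);
            byPriority.sort(Comparator.comparingInt(ShortcutKey::priority).reversed());

            List<ShortcutKey> kept = new ArrayList<>();
            int used = 0;
            for (ShortcutKey k : byPriority) {
                if (used + k.widthPx() <= screenWidthPx) {  // fits: keep it
                    kept.add(k);
                    used += k.widthPx();
                }
                // else: dropped from the display; the function itself stays
                // reachable via a menu or button press, as in [0089].
            }
            return kept;
        }
    }

Keys that do not fit are simply omitted from the returned layout while their underlying functions remain reachable through other means, matching the behavior described for device 630.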
[0090] An example of dynamic adjustment in a user interface as it
pertains to a zooming feature is illustrated in FIG. 6C. For
example, a user device 645 is shown listing several electronic
mails of the user. In an effort to provide the user with as much
information as possible, electronic mail information is presented
in a small font size making it difficult for a user to sometimes
comprehend the information presented on the device 645. Utilizing a
dynamic adjustment zooming feature, as a user scrolls up and down
the list of electronic mails, a highlighted or selected electronic
mail 650 is magnified or `zoomed,` whereby its font size is
increased. All other electronic mails present on the device 645
are either further reduced in size, whereby all information remains
on the screen but in varying sizes, or certain electronic mail
listings are `dropped` from the screen (e.g., instead of ten
commonly sized electronic mail listings, zooming in on any
particular electronic mail message will result in one magnified
message and seven messages at the original size, with the other two
messages `dropped` from the screen).
[0091] FIG. 6D illustrates dynamic adjustments in a user interface
as those adjustments might pertain to a `quick-look` or `short
attention span` feature. For example, providing the user with all
possible available information in a minute font size may be
appropriate when a user of a device is able to provide their
undivided attention to the device and focus attentively on that
information, as is shown in device 655. In device 655, the user is
presented with time and date information 660, various feature or
short-cut keys 665 (e.g., phone, Internet, electronic mail,
calendar, contacts, notepad) and a tool bar 670.
[0092] In some instances, however, a user may be unable to direct
their undivided attention to their device as they might be walking
while reviewing their device or driving a vehicle. In these
instances, the user is forced to divide their attention; for
example, ensuring the user does not accidentally walk into another
person or veer off the road. The user, to the extent it is
necessary for them to access their device with divided attention,
often only needs to take a `quick-look` at information. Device 675
illustrates a user interface whereby a `quick look feature` is
enabled whereby only essential information is displayed. For
example, in device 675 with a quick-look interface enabled, the
user is still presented with time and date information 680 but that
information is enlarged in size and takes up twice as much space as
the time and date information 660 in non-quick-look enabled device
655. Additionally, the short cut keys 685 have been reduced in
number to only those of utmost importance. In this case, those keys
are phone, calendar, and contacts and they are displayed at nearly
three-times their normal size. Further, the tool bar 690 has been
totally dropped from the screen as it is unlikely a user will be
performing maintenance or adjusting settings on their device 675
while only able to offer a short amount of attention.
[0093] In a short-attention span or quick-look mode, the adjustment
and selection of features to be displayed and, likewise, those
features removed from the display can be set by default (e.g.,
factory settings) or they can be modified by the user as they
reflect the user's needs. Furthermore, using an agnostic user
interface, the displayed information will adjust in size as is set
forth by the default settings or the user, in conjunction with
certain limitations imposed by the actual device (e.g., screen
size).
[0094] FIG. 7A illustrates an embodiment of a layout engine 700
that may be found in particular embodiments of the presently
described agnostic user interface. Layout engine 700 comprises a
rules engine 720 and a logic engine 730. An embodiment of the
layout engine 700 provides intelligent flexibility for adjusting
interface layout (e.g., spatial interrelationships between elements
and/or information and/or structural aspects therein) to fit
multiple screen sizes, densities and aspect ratios.
[0095] Rules engine 720 comprises a variety of defined constraints
with regard to the display of user information on the display of a
device. For example, rules engine 720 may be programmed to
understand that the particular device on which the rules engine 720
resides has a limited screen size in terms of pixels or limitations
with regard to the number of colors the display can render. Other
rules might concern the display of certain languages or file
formats (e.g., HTML, *.pdf, or *.ppt). Additional rules may be
related to limitations on dedicated processing power for the
rendering of any particular graphic as it pertains to the general
operation of the device or during a particular operation (e.g.,
while downloading content from a website).
[0096] The constraints delineated in the rules engine 720 can be
installed by an original equipment manufacturer or may be subject
to user adjustment (e.g., deactivating default settings).
Constraints in the rules engine 720 may also be updated
automatically during the operation of the device or configured as
the result of intelligent determinations by the device.
[0097] For example, if a rules engine 720 determines that it is
resident on a device for which it does not know the pixel
limitations of the display, it can make certain assumptions as to
the display size. The rules engine 720 might recognize that the
layout engine 700 is resident on a Nokia 6600 Series phone but not
that it is on a Nokia 6680 phone, in particular. From the rules
engine's 720 knowledge of the Nokia 6600 Series, it can make an
assumption that the pixel limitations are `at least` or `at most`
certain numbers. As a result, the layout engine 700 may not produce
an optimized graphic image on the device but at least one
sufficient to operate and not cause a degraded viewing experience
like that shown in FIG. 5B (520).
[0098] The rules engine 720 can also receive new updates with
regard to device information during a synchronization operation
with a desktop PC or server that hosts other programs related to
the device (e.g., a mail forwarding program that forwards mail from
the desktop to the mobile device). These updates might be
downloaded at the desktop PC or server automatically or as a result
of the user affirmatively downloading an upgrade or patch from the
appropriate provider of that information (e.g., the device
manufacturer or the agnostic interface designer).
[0099] The rules engine 720 can also request the user manually
provide this information if an assumption or synchronization
operation fails to provide the necessary information.
[0100] An input request 710 from the user of the device or a
program running on the device comprises a request to display
certain information on the device; for example, a text box of x*y
pixel size or a particular color. As noted, this request might be
generated by the user during the course of using a drawing
application. Similarly, this request might be generated by a
particular program as a result of the occurrence of a particular
event, for example, an alarm indication that generates a text box
indicating a certain event is about to begin. The input request 710
need not be of any particular format or language so long as it may
be processed by the layout engine 700 with regard to determining
whether the particular text and/or graphic event may be displayed
on the device in accordance with requested size, color,
configuration, etc.
[0101] The layout engine 700 also comprises the aforementioned
logic engine 730. The logic engine 730, based on an input request
710, will query the rules engine 720 to determine if the particular
input request 710 may be processed as requested on the particular
device or if some adjustments will be required with regard to the
limitations of the device as set forth in the rules engine 720. For
example, an input request 710 might request the display of a text
box of x*y size and of a particular shade of aqua. The logic
engine 730 of the layout engine 700 will identify the requested
parameters (e.g., size and color) and make a query of the rules
engine 720 to determine if the particular device hosting the layout
engine 700 can accommodate the request 710. If the rules engine 720
reflects that the request 710 can be processed and subsequently
rendered without violating a particular rule, the logic engine 730
will approve the request 710 thereby resulting in an output
instruction 740.
[0102] Output instruction 740, like the input request 710, is not
of any particular format or language so long as it may be generated
by the layout engine 700 with regard to indicating that a
particular text and/or graphic event may be displayed on the device
in accordance with requested size, color, configuration, etc.
Instruction 740 need only be capable of subsequently being
processed by the appropriate component of the device providing for
the display of the text and/or graphic event (e.g., a graphics or
rendering engine (not shown)).
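Assembling the pieces of FIG. 7A, one plausible and purely illustrative shape for the request/rules/logic flow is the following Java sketch; the RulesEngine methods shown are assumptions, not an API disclosed by the application:

    // Hypothetical sketch of FIG. 7A's flow: the logic engine 730 checks
    // an input request 710 against the rules engine 720 and emits an
    // output instruction 740, substituting the nearest permitted
    // parameters when the request violates a device constraint.
    record DisplayRequest(int widthPx, int heightPx, String color) {}

    interface RulesEngine {
        boolean permits(DisplayRequest r);                  // constraint check
        DisplayRequest nearestPermitted(DisplayRequest r);  // best fit within the rules
    }

    class LogicEngine {
        private final RulesEngine rules;
        LogicEngine(RulesEngine rules) { this.rules = rules; }

        // Returns the output instruction 740 for a given input request 710.
        DisplayRequest process(DisplayRequest request) {
            if (rules.permits(request)) {
                return request;                     // approved as requested
            }
            return rules.nearestPermitted(request); // constrained substitute
        }
    }

A request the rules permit passes through unchanged as the output instruction 740; otherwise the logic engine substitutes the nearest permitted parameters, as in the aqua-to-light-blue example discussed below.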
[0103] Should the logic engine 730's query of the rules engine 720
determine that the requested text and/or graphic event cannot be
displayed on the particular device, the logic engine 730 may
further query the rules engine 720 to determine what the particular
constraints of the device are with regard to the rejected event
(e.g., the device cannot display aqua but can display light blue).
This information might also reside directly in the logic engine 730
or at a locale on the device accessible by the engine 730. For
example, information pertaining to commonly requested display
events might be cached in the logic engine 730 or in memory (not
shown) accessible by the logic engine 730.
[0104] Similarly, the logic engine 730, in certain embodiments, may
be trained whereby the logic engine 730 begins to recognize a
repeated display event and without query to the rules engine 720,
understands that such a display event is impossible or otherwise
violates the rules of the device as set forth in the rules engine
720. Through the training of the logic engine 730 and the now
absent need for continued queries to the rules engine 720, the
processing speed of a display event may be increased.
[0105] The logic engine 730, in some embodiments, may also be
expressly instructed by the user (e.g., through pre-programming or
a query during processing) to respond to a particular violation of
a constraint set forth in the rules engine 720 in a particular
manner. For example, if the request 710 pertains to the display of
aqua but the device can only display light blue, the user might
pre-program the logic engine 730 to display sea-foam green instead
of resorting to light blue.
[0106] Once the logic engine 730 determines the constraints of the
particular device in conjunction with the requested event as
reflected by the input request 710, the layout engine 700 will generate
the output instruction 740 that best reflects the scope of the
initial request 710 but while remaining within the particular
constraints as set forth by the rules engine 720 or, in some
embodiments, as directly instructed by the user. For example, the
logic engine 730 may resort to the aforementioned example of light
blue versus aqua.
[0107] By further example, if a request 710 pertains to the display
of a graphic or text information that exceeds the size of the
actual device (for example, as illustrated in FIG. 5B (540)), the
logic engine may determine what information is necessary to be
displayed to carry out the scope of the initial request 710.
[0108] For example, request 710 might pertain to displaying a
user's contacts directory. On one device, the display of the
directory might normally result in the concurrent display of the
date and time as well as a telephone icon whereby a user can
highlight a particular name in the contact directory and then `tap`
the telephone icon resulting in the phone dialing the number of the
person in the contact directory (i.e., a speed-dial feature).
[0109] If the physical limitations of a particular device are such
that the time and date, directory and speed-dial icon cannot all be
displayed, the logic engine 730 will determine what information is
critical to the scope of the request 710 and, operating within the
confines of the rules engine 720, generate an output instruction
740 that will result in, for example, the relocation of the
speed-dial icon on the display to a more efficient space, the
reduction in size of the contacts directory (or the display of only
a limited number of names in the directory) and the total removal
of the date and time from the display during this particular
operation.
[0110] An embodiment of the layout engine 700 may also provide for
cross-representation of resources such as bitmaps, templates or
screen layouts, animations and sounds.
[0111] As illustrated in FIG. 7B, the rules engine 720 of the
layout engine 700 may be integrated with the abstraction layer 420
of the virtual platform 400 that allows for the interoperability of
a particular user interface on any variety of devices and/or
platforms. While the layout engine 700 and virtual platform 400
need not necessarily be physically integrated, the agnostic user
interface of the present invention requires that the two components
at least be capable of communicating with one another so as to allow
for the translation of what might be a foreign instruction by the
virtual platform 400 into an instruction otherwise comprehensible
by the layout engine 700.
[0112] In some embodiments, the layout engine 700 may be further
integrated with a cross-platform events engine as is described in
FIG. 12.
[0113] In some embodiments of the present invention, the rendering
of an agnostic user interface will be effectuated utilizing vector
graphics, although the rendering of agnostic user interface
information may also occur through the use of other graphic
rendering techniques. Vector graphics represent those graphic
images generated from mathematical descriptions that determine the
position, length, and direction in which mathematically-describable
objects--such as lines, ellipses, rectangles, rounded rectangles,
abstract polygons, filled and non-filled regions, gradients,
fountain fills, Bezier curves and so forth--are drawn. Unlike
raster graphics, objects are not created as patterns of individual
dots or pixels. Through utilizing vector graphics, the `look and
feel` of a particular interface is maintained across platforms and
devices thereby resulting in increased scalability as each element
is stored as an independent object.
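The scaling advantage can be seen in a toy example: because a vector primitive stores a mathematical description rather than pixels, rescaling it for a different display is pure arithmetic. The Java sketch below is an illustrative assumption, not a drawing API from the application:

    // A vector line stores endpoints, not pixels, so adapting it to a
    // display with a different resolution is one multiplication per
    // coordinate.
    record VectorLine(double x1, double y1, double x2, double y2) {
        VectorLine scaled(double factor) {
            return new VectorLine(x1 * factor, y1 * factor, x2 * factor, y2 * factor);
        }
    }

    class ScalingDemo {
        public static void main(String[] args) {
            VectorLine wide = new VectorLine(0, 0, 300, 0);    // fits a 320-px-wide screen
            VectorLine narrow = wide.scaled(176.0 / 320.0);    // rescaled for a 176-px-wide screen
            System.out.println(narrow);  // VectorLine[x1=0.0, y1=0.0, x2=165.0, y2=0.0]
        }
    }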
[0114] Vector graphics also aid with regard to `skinning` whereby
the look of a particular platform or software program is changed
but its underlying functionality remains unaltered. Through the use
of skinning, opportunities for branding, advertising, and user
customization are also increased. Skinning also allows for platform
independence whereby one customized user interface can be ported to
various devices or operating platforms and because of the
utilization of vector graphics versus rasterization or bitmapping,
that one interface can be scaled and adjusted as necessary by, for
example, a layout engine 700 and/or virtual platform 400. The end
result of using vector graphics is that `real space` remains
consistent and relative.
[0115] Graphic renderings may also be expressed as a relationship
between a particular point and its location on a Cartesian grid
(e.g., a grid system). For example, FIG. 8 illustrates such a
Cartesian grid 800. In such a rendering system, a base coordinate
810 is first identified that will serve as the starting point
(either directly or indirectly) for all other graphic information
rendered on a display. In the presently illustrated embodiment, all
points on the Cartesian grid are expressed in the form of pixels.
Other embodiments may utilize any type of scaling unit so long as
it provides a consistent basis for determining distance between
points.
[0116] Rendering a graphic from the base coordinate 810, a second
coordinate 820 may be identified. The second coordinate 820, in the
present example, may be reflected formulaically as a base
coordinate plus a percentage modifier applied in the context of an
overarching scaling constant (coordinate = base + % modifier ×
constant). In this example, the constant has been reflected as
pixels, more specifically one pixel; scaling units other than a
pixel can be utilized, as can constants other than one. Second
coordinate 820, in this instance, is rendered as a result of being
located on the Y-axis at a 4-times percentage increase over the
Y-axis location of base coordinate 810 in the context of a 1-pixel
scaling unit. In other words, coordinate 820 is located 4 pixels
higher on the Y-axis than base coordinate 810.
[0117] Third coordinate 830 is derived in a similar fashion, being
located 4 pixels along the X-axis from second coordinate 820 and 4
pixels along both the X-axis and Y-axis from base coordinate 810.
Base coordinate 810, in conjunction with second coordinate 820 and
third coordinate 830, results in the rendering of a triangle 840 on
the display.
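In code, the three coordinates of FIG. 8 can be computed directly from the base coordinate; this is a hypothetical, rendering-free Java sketch using the 1-pixel scaling unit from the text:

    // Hypothetical sketch of FIG. 8's grid layout: each coordinate is the
    // base plus a modifier expressed in scaling units (here, 1 px).
    record Point(int x, int y) {
        Point offset(int dxUnits, int dyUnits, int unitPx) {
            return new Point(x + dxUnits * unitPx, y + dyUnits * unitPx);
        }
    }

    class TriangleLayout {
        public static void main(String[] args) {
            int unit = 1;                            // 1-pixel scaling unit
            Point base = new Point(0, 0);            // base coordinate 810
            Point second = base.offset(0, 4, unit);  // coordinate 820: 4 px up the Y-axis
            Point third = base.offset(4, 4, unit);   // coordinate 830: 4 px along X and Y
            System.out.println(base + " " + second + " " + third);  // triangle 840's vertices
        }
    }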
[0118] A coordinate layout system is not meant to be limited to
only a Cartesian grid but also encompasses, for example, polar
coordinates and a three dimensional grid (i.e., x*y*z).
[0119] The final rendering of one object can be used as a base
coordinate for a second object in a semantic coordinate layout
system. For example, third coordinate 830 can be utilized as a
second base coordinate 850 for a new object. That is, the upper
right hand corner of a first object (triangle 840) serves as the
bottom left corner of a second object (square 850).
[0120] For further example, utilizing an exemplary semantic
coordinate layout system, a first base coordinate might be
identified as the upper right hand corner of another object (e.g.,
coordinate 850). In some instances, however, the location of the
upper right hand corner of another object will not be known as a
layout engine 700, for example, may still be determining the locale
of certain information to be rendered.
[0121] Once the layout engine 700 evaluates the layout of a
particular device, the actual location of the upper right hand
corner of another object is ascertained. Once that location is
ascertained (e.g., coordinate 850), semantic coordinates allow for
the rendering of additional coordinates and/or the entire remainder
of an object. This late binding of locations through the use of a
semantic specification (e.g., the upper left of object Y is ten
pixels from the lower right of object X) further allows for
automatic adjustment of layout.
[0122] Formulaic expression may also be used in a semantic
coordinate layout system. For example, (Lower Y=Upper Right X+20%
of Width of X+10 Display Units). In this example, none of the
values are immediately calculable until the layout engine 700
renders object Y and a relationship between display unit and pixel
(or some other base measurement) is determined by the layout engine
700.
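As an illustrative sketch only, the late-bound formula above can be
evaluated once the layout engine has resolved the geometry of object
X and the display-unit-to-pixel ratio; the Python names and sample
values below are assumptions for illustration, not part of the
specification.

    # Illustrative: Lower Y = Upper Right X + 20% of Width of X
    #                         + 10 Display Units
    def lower_left_of_y(x_upper_right, x_width, display_unit_px):
        return x_upper_right + 0.20 * x_width + 10 * display_unit_px

    # None of these values is calculable until the layout engine fixes them:
    print(lower_left_of_y(x_upper_right=120, x_width=80, display_unit_px=2))
    # 156.0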
[0123] The exemplary formulas provided herein are not meant to be
limiting. Various other formulaic entries may be utilized in the
rendering and layout of objects and information.
[0124] In some embodiments of the present invention, the rendering
of objects or information in a scalable user interface that
operates agnostically of device or platform will often utilize a
combination of vector graphics, a grid system and/or a semantic
coordinate layout system. For example, a line is specified as being
drawn from point x1, y1 to point x2, y2 with a specified line width
and perhaps a specified arc. Points x1, y1 and x2, y2 may be
determined as the result of utilizing a grid layout system.
Individual objects may then be rendered in light of these
coordinates using vector graphics. Additional objects may then be
expressed as semantic coordinates considering certain coordinates
of previously rendered objects.
[0125] Alternatively, some embodiments of the present invention may
utilize bitmapping/rasterization in the context of a particular
layout system (e.g., an object is rendered utilizing a combination
of techniques individually and in conjunction with one another).
For example, utilizing a semantic coordinate system, a base
coordinate for a display button may be indicated as ten display
units right from a previously rendered object. The actual button,
however, may be a bitmap in a library and is rendered on the screen
with its lower-left corner being at the base coordinate as
determined by a semantic coordinate layout system.
[0126] The layout engine 700 of FIG. 7A may operate in conjunction
with various rendering tools to result in scalable or intelligently
placed graphic events. For example, the layout engine 700 may
determine that an input request 710 to render a particular button
or icon cannot be displayed as requested following a query to the
rules engine 720. The logic engine 730, however, may instead
determine what aspects of the particular icon need be adjusted or
scaled (e.g., adjusting the display unit or constant) whereby the
icon is still rendered but on a smaller scale in accordance with
various vector graphic or coordinate layout techniques.
[0127] It should further be noted, as is illustrated in FIG. 9,
that the rendering of graphic events or information can occur
hierarchically. For example, display 900 may exhibit certain
limitations as are recognized by a rules engine 720 (FIG. 7A).
Limitations on display or other events can also be hierarchical and
also stored in the rules engine 720.
[0128] An example of hierarchical limitations is shown whereby a
sidebar 910 is comprised of various smaller icons 920-940. Sidebar
910 may impose its own independent limitations as they pertain to
smaller icons 920-940, that is, smaller icons 920-940 cannot exceed
the width and height of the sidebar 910 just as sidebar 910 may not
exceed the limitations of display 900.
[0129] A similar situation exists with text boxes 950 and 970. Both
text box 950 and text box 970 are comprised of smaller display
elements 960 and 980-990, respectively. Display elements 960 and
980-990 must not exceed the limitations imposed by text boxes 950
and 970 just as those text boxes must not exceed the limitations of
display 900.
[0130] Furthermore, limitations can exist between the smaller
sub-elements of the display. For example, icons 920-940 may have a
limitation wherein they cannot come within X pixels of one another
due to color schemes that might begin to `blend` together and
result in a deteriorated viewing experience.
[0131] Similarly, display elements 980 and 990 may be fixed as to a
certain size that cannot be scaled any larger or smaller due to the
amount of textual information contained therein; if the text were
reduced any further than its default font size, it would become
illegible.
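A minimal sketch of such hierarchical limitations, assuming simple
rectangular bounds, might test each element against its parent and
its siblings as follows; the rectangles and gap value are
illustrative only.

    # Illustrative: (x, y, width, height) rectangles in pixels.
    def fits_within(child, parent):
        cx, cy, cw, ch = child
        px, py, pw, ph = parent
        return (px <= cx and py <= cy and
                cx + cw <= px + pw and cy + ch <= py + ph)

    def too_close_vertically(a, b, min_gap_px):
        # vertical gap between stacked rectangles (a above b)
        return b[1] - (a[1] + a[3]) < min_gap_px

    display = (0, 0, 240, 320)
    sidebar = (0, 0, 40, 320)   # must fit within display 900
    icon_a = (4, 10, 32, 32)    # icons must fit within sidebar 910
    icon_b = (4, 50, 32, 32)
    print(fits_within(sidebar, display), fits_within(icon_a, sidebar))
    # True True
    print(too_close_vertically(icon_a, icon_b, min_gap_px=4))  # False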
[0132] FIG. 10A illustrates a menu 1000 with available and
not-available options as is known in the prior art. Menu 1000
illustrates a number of available menu items 1010 such as "New" and
"Open." Menu 1000 also displays a number of not available menu
items 1020 such as "Close," "Save" and "Properties."
[0133] Available menu items 1010 are those menu items or commands
that are presently available for execution, for example, opening a
new file or opening an existing file. Not available menu items are
those menu items or commands that are not presently available for
execution due to any number of factors. For example, an actual file
or document may not be open. In such a case, a non-existent file
cannot be closed or saved. Similarly, properties as to a
non-existent file cannot be displayed. Should a file actually be
opened, it is possible that not available menu items 1020 may
become available menu items 1010 as that now open file or document
can now be closed, saved or have its properties reviewed.
[0134] In the prior art, not available menu items 1020 are usually
displayed as `grayed out.` That is, while available menu items 1010
are displayed in generally prominent text and can be selected
through, for example, highlighting with a mouse or keypad, or
through a macro or other key combination (e.g., Ctrl+N in
Microsoft.RTM. Word results in a new document opening), those items
that are not available (not available menu items 1020) are
generally displayed in less prominent text (e.g., a light gray
color that still allows for readability but indicates
unavailability as a menu command).
[0135] In applications with a large hierarchy of menu commands or
menu commands with various levels (e.g., File-Open-Folder-File
Name), selecting or executing an available menu command 1010 often
takes up a large amount of screen space due to a multi-level menu
tree or various other menu screens, tabs and so forth. In a device
with limited display space (e.g., a mobile device), such a complex
menu-tree can obscure the entire display or, in some instances,
may not be subject to display in any form due to the number of
levels and/or menus and processing or other display limitations of
any particular device.
[0136] Even in applications with generally straight forward menu
displays, a large number of menu commands can cause the menu to
extend beyond the height of the screen, thereby necessitating a
scroll or elevator bar. While scroll or
elevator bars can artificially provide additional space by
scrolling available menu commands 1010 up and down the screen,
operating such a scroll bar in a limited display area is
disadvantageous in that operating minute display commands, such as
a scroll bar, with generally small operational controls on a mobile
device is more difficult than on a desktop or even a laptop
personal computer.
[0137] Further, the scroll bar will cause certain available menu
commands 1010 to disappear from the screen as available menu
commands 1010 are scrolled up and down by the user. Scrolling might
cause a particular command of importance or interest to disappear
as the user views other available menu commands 1010. Part of
this difficulty is a result of the integration of all menu commands
on the menu, that is, both available menu commands 1010 and not
available menu commands 1020. For example, a particular menu might
comprise ten various commands. Despite the fact that only two of
those commands might be available menu commands 1010 as a result of
the current state of the device or an application, the remaining
eight not available menu commands 1020 will still be displayed
thereby utilizing a large amount of screen display space.
[0138] FIG. 10B illustrates a menu 1030 exhibiting intelligent
prioritization of menu commands as governed by their present
availability according to an embodiment of the present invention.
In FIG. 10B, the state of the device or application is the same as
that of the prior art menu as illustrated in FIG. 10A. In FIG. 10B,
however, only available menu commands 1040 are displayed. This
results in savings of space, memory and processing power as, for
example, only two menu commands--New and Open--are displayed
(available menu commands 1040). In such an embodiment of a menu
1030, it would not be necessary to utilize a scroll bar to access
various menu commands as the menu 1030 is reduced in size due to
the non-display of not available menu commands 1020.
[0139] Should the state of the device or application change,
however, those commands that are presently not displayed but
otherwise relevant to the change in device state would then be added
to the list of available menu commands 1040 and displayed on the
menu 1030.
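A minimal sketch of availability-governed display, assuming each
command carries a predicate over the current device or application
state, follows; the command names mirror FIGS. 10A-10B, while the
predicate structure is an illustrative assumption.

    # Illustrative: render only commands whose predicate is true.
    commands = {
        "New":        lambda state: True,
        "Open":       lambda state: True,
        "Close":      lambda state: state["file_open"],
        "Save":       lambda state: state["file_open"],
        "Properties": lambda state: state["file_open"],
    }

    def available_menu(state):
        return [name for name, ok in commands.items() if ok(state)]

    print(available_menu({"file_open": False}))  # ['New', 'Open']
    print(available_menu({"file_open": True}))   # all five commands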
[0140] FIG. 10C illustrates a menu 1050 exhibiting intelligent
prioritization of menu commands as governed by user preference
according to an embodiment of the present invention. In FIG. 10C,
the state of the device is such that a menu would normally, for
example, display ten menu commands if it were a type of a menu as
found in the prior art of FIG. 10A. In FIG. 10C, however, only
preferred available menu commands 1060 are displayed; preferred
available menu commands 1060 in this particular embodiment are not
just those commands capable of execution but those commands capable
of execution and whose display presence is preferred by the user of
the mobile device.
[0141] For example, in menu 1050 the display of all ten available
menu commands would still occupy a large amount of space on most
mobile devices despite the fact that, for example, five additional
commands are not displayed as a result of them being not available.
In this particular embodiment, the mobile device--as a result of
logic contained in, for example, an abstraction layer--will
recognize that of the ten available menu commands, the user of the
mobile device only utilizes three of those menu commands on any
regular basis. The mobile device will then display only those three
menu commands as preferred available menu commands 1060. Those
commands that are not preferred but are otherwise available will
not be displayed 1070.
[0142] This results in a better end user experience through savings
of space, memory and processing power in addition to smoother and
more navigable interfaces as only those available menu commands
actually needed by the user are displayed. In such an embodiment of
a menu, it would not be necessary to utilize a scroll bar to access
various menu commands as the menu 1050 is reduced in size due to
the non-display of not available menu commands as well as available
menu commands that are not preferred by the user.
[0143] Preferred available menu commands 1060 can be those commands
as recognized by the device as being preferred (e.g., in 50
previous uses of a particular menu, only two commands out of ten
were utilized) or can be identified manually by the user. Preferred
available menu commands 1060 can also be set by default by the
manufacturer of a device or agnostic platform. For example, it
might be recognized in the industry that while particular menu
commands might be useful, they are only utilized by a small
percentage of the public utilizing the device. As such, only those
commands used by the general public are displayed when
available.
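One way to sketch such preference logic, assuming usage counters
like the 50-use example above, is to retain only the available
commands selected in at least some fraction of recent menu uses; the
ten-percent threshold below is an illustrative assumption.

    # Illustrative: filter available commands by observed usage frequency.
    from collections import Counter

    usage = Counter({"New": 30, "Open": 18, "Send": 2})  # selections seen
    menu_uses = 50

    def preferred(available, threshold=0.10):
        return [c for c in available if usage[c] / menu_uses >= threshold]

    print(preferred(["New", "Open", "Send", "Print"]))  # ['New', 'Open']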
[0144] Should the state of the device or application change,
however, those commands that are presently not displayed but
preferred would then be added to the list of preferred available
menu commands 1060 and displayed on the menu 1050 as the state
change invokes the availability of those commands.
[0145] The same intelligence utilized in a menu can also be
utilized with regard to display icons. FIG. 11A illustrates a
device display 1100 wherein limitations as to screen size, pixels
or other factors do not affect the display of a series of icons
1110-1140. These icons 1110-1140 may be for such functions as
telephone, calendar, Internet and contacts.
[0146] FIG. 11B illustrates a device display 1150 wherein certain
limitations, screen-width for example, make it impossible to display
four icons of a given size. In this example, the device may display
only those icons 1060-1070 that are preferred by the user, such as
calendar and telephone. Like the menu displayed
in FIG. 10C, these preferred icons 1060-1070 may be the result of
default preferences, user-input preferences or intelligent decision
making by logic in a device. This logic may be similar to the logic
used in a layout engine as illustrated in FIG. 7A.
[0147] FIG. 12 illustrates a cross-platform event engine 1200 as
utilized in an exemplary embodiment of the present agnostic user
interface. Cross-platform event engine 1200 comprises an event
library 1210 and a logic engine 1220. An embodiment of the
cross-platform event engine 1200 translates a first platform's
events (e.g., key down, up, or center press) into event formats
recognizable by a second platform whereby software code can operate
unaltered on different platforms with different event encoding
and/or event sets. An embodiment of the cross-platform event engine
1200 also ensures the presence and standardization of certain
events (e.g., press-and-hold and key repeats).
[0148] Event library 1210 comprises information as it pertains to
the occurrence of certain events on various devices and/or
platforms. For example, event library 1210 might be programmed to
understand that pressing and holding a particular button on a
particular mobile device for a particular period of time (e.g., the
`1` number key for two seconds on a certain device) will result in
the mobile device activating its telephone functionality and
automatically dialing into a voice mail account assigned to that
particular mobile device.
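The event library can be pictured, purely for illustration, as a
table keyed by device and gesture; the device names, gesture
encoding and action names below are hypothetical assumptions.

    # Illustrative: (device, gesture) -> action.
    event_library = {
        ("first_device",   ("key_1", "hold", 2.0)): "dial_voicemail",
        ("second_device",  ("key_1", "hold", 2.0)): "launch_email",
        ("present_device", ("key_1", "hold", 3.0)): "dial_voicemail",
    }

    def lookup(device, gesture):
        return event_library.get((device, gesture))

    print(lookup("first_device", ("key_1", "hold", 2.0)))  # dial_voicemail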
[0149] The information residing in the event library 1210 can be
installed by an original equipment manufacturer or may be subject
to user adjustment (e.g., deactivating default settings and/or
imposing new settings). Information in the event library 1210 may
also be updated automatically during the operation of the device or
configured as the result of intelligent determinations by the
device.
[0150] For example, if the event library 1210 determines that it is
resident on a device for which it does not know what event will be
triggered by the two-second press and hold of the `1` key, the
event library 1210 can make certain assumptions based on the device
belonging to a particular series even though the exact model is not
known.
[0151] This assumptive logic is similar to that of the layout
engine as described in FIG. 7.
[0152] The event library 1210 can also receive new updates with
regard to device information during a synchronization operation
with a desktop PC or server that hosts other programs related to
the device (e.g., a mail forwarding program that forwards mail from
the desktop to the mobile device). These updates might be
downloaded at the desktop PC or server automatically or as a result
of the user affirmatively downloading an upgrade or patch from the
appropriate provider of that information (e.g., the device
manufacturer or the agnostic interface designer).
[0153] The event library 1210 can also request the user manually
provide this information if an assumption or synchronization
operation fails to provide the necessary information.
[0154] An event input 1230 from the user of the device or a program
running on the device comprises a particular operation that should
result in the activation of a particular application, the display
of certain information or the invocation of some other
functionality particular to the device. For example, the two-second
hold and press of the `1` number key should result in the launch of
voice mail access on particular devices.
[0155] Similarly, this request might be generated by a particular
program as a result of the occurrence of another particular event,
for example, an internal alarm indication (e.g., it is now 8:00 AM)
may result in the generation of a text box indicating a certain
event is about to begin, accompanied by an alarm sound (e.g., a
repeated beep). The execution of a particular string of code in a
device (e.g., the code for generating the box) may comprise event
input 1230 just as may the activity of the user (e.g., press
and hold of a particular key).
[0156] The event input 1230 need not be of any particular format or
language so long as it may be processed by the cross-platform event
engine 1200 with regard to determining whether a particular
application, sub-event, display, sound, etc. should be
executed.
[0157] The cross-platform event engine 1200 also comprises the
aforementioned logic engine 1220. The logic engine 1220, based on
an event input 1230, will query the event library 1210 to determine
if the particular event input 1230 may be processed as requested on
the particular device or if some adjustments will be required with
regard to the particular configuration of the device as set forth
in the event library 1210.
[0158] If the event library 1210 reflects that the event input 1230
can be identified, processed and subsequently executed without
resorting to translation or reconfiguration of the input, the logic
engine 1220 will allow the event input 1230 to result in the
generation of an event instruction 1240.
[0159] For example, the user presses and holds the `1` key for two
seconds (event input 1230). The cross-platform event engine 1200
will accept the event input 1230 and will query the event library
1210 with regard to the event engine 1200 having received this
particular input. The event library 1210 (presuming it to have been
programmed with this particular information) will recognize that on
a particular device, a two-second press and hold of the `1` key is
meant to execute a telephone call to the user's voice mail. The
event library 1210 will communicate the identification of this
operation to the logic engine 1220. The logic engine will then
identify that the device is compatible with that operation thereby
resulting in the generation of an event instruction 1240 that will
cause the activation of a telephone call to the user's voice
mail.
[0160] Event instruction 1240, like the event input 1230, is not of
any particular format or language so long as it may be generated by
the cross-platform event engine 1200 with regard to indicating that
a particular text and/or graphic event may be displayed on the
device or that a particular application or other operation should
be executed in accordance with the event input 1230.
[0161] Should the logic engine's 1220 query of the event library
1210 determine that the requested operation is not immediately
compatible with the present device (e.g., the device does not
utilize a two-second press and hold for voice mail access but a
three-second press and hold), the logic engine 1220 may further
query the event library 1210 to determine whether the particular
configuration of the device allows for an identical or similar
operation and, if so, whether the particular event input 1230 can
be converted or translated into a request that will result in the
identical or similar operation.
[0162] For example, if the device recognizes that a two-second
press and hold is being executed, the logic engine 1220, after
having accessed the events library 1210, might recognize that this
particular event is usually associated with voice mail access on
particular devices. The logic engine 1220, in conjunction with the
events library 1210, will determine that while voice mail access is
possible on the present device, access requires the execution of a
three-second press and hold event. The cross-platform event engine
1200 will convert the initial request (1230) into the proper
request whereby access to voice mail will occur as the result of an
event instruction 1240.
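A minimal sketch of this conversion path, reusing an illustrative
library of the kind pictured earlier, first tries a direct lookup,
then infers the intended action from other devices and rewrites the
request into the present device's native binding; all names are
assumptions rather than the patent's own code.

    # Illustrative conversion: 2-second hold -> native 3-second binding.
    event_library = {
        ("first_device",   ("key_1", "hold", 2.0)): "dial_voicemail",
        ("present_device", ("key_1", "hold", 3.0)): "dial_voicemail",
    }

    def to_instruction(device, gesture):
        action = event_library.get((device, gesture))
        if action is not None:
            return action                      # directly compatible
        # what does this gesture mean on other devices?
        intended = next((a for (d, g), a in event_library.items()
                         if g == gesture), None)
        # does the present device bind that action to another gesture?
        for (d, g), a in event_library.items():
            if d == device and a == intended:
                return a                       # converted request
        return None

    print(to_instruction("present_device", ("key_1", "hold", 2.0)))
    # dial_voicemail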
[0163] Information pertaining to possible translation or conversion
might also reside directly in the logic engine 1220 or at a locale
on the device accessible by the engine 1220. For example,
information pertaining to common events might be cached in the
logic engine 1220 or in memory (not shown) accessible by the logic
engine 1220.
[0164] Similarly, the logic engine 1220, in certain embodiments,
may be trained, whereby the logic engine 1220 begins to recognize a
repeated event input 1230 and without query to the event library
1210 understands that such an event is possible but initiated
through a different process as defined by the event library 1210.
Through the training of the logic engine 1220 and the now absent
need for continued queries to the event library 1210, the
processing speed of an event instruction 1240 is increased.
[0165] The logic engine 1220, in some embodiments, may also be
expressly instructed by the user (e.g., through pre-programming or
a query during processing) to respond to a particular difference in
configuration as identified by the event library 1210 in a
particular manner. For example, if the event input 1230 pertains to
the particular timing of a key press to activate a particular
application, the user might pre-program the logic engine 1220 to
automatically launch that application (e.g., as a default) instead
of querying the events library 1210 and perhaps coming to an
erroneous result as to the particular nature of the event and how
it might be processed in its native environment.
[0166] In that regard, the cross-platform events engine 1200 can
further be configured to recognize that the user of the device is
perhaps most familiar with a particular operating system platform
or mobile device. Accordingly, the logic engine 1220, in
conjunction with event library 1210, may consider, in the event
there is a disparity as to what event a user actually seeks to
execute through an event input 1230, those events that relate to
the user's more familiar platform or device prior to considering
any other particular events as they relate to less familiar devices
or platforms.
[0167] For example, a first device might associate a two-second
press and hold with attempting to access voice mail. A second
device might associate a two-second hold with launching an
electronic mail program and wirelessly accessing the Internet. On a
separate device running a cross-platform event engine 1200, the events
library 1210 will be programmed with information concerning events
as they relate to both devices (e.g., a two-second press and hold
relating to voice mail on the first device and electronic mail on
the second). When an event input 1230 (two-second button hold) is
received by the cross-platform event engine 1200, the logic engine
1220 will query the event library 1210 and recognize that such an
input 1230 can have a differing result between devices. Having been
previously programmed to note that the user formerly was a `first
device` user, however, the logic engine 1220 will elect to convert
the input 1230 to an input compatible with voice mail access
thereby resulting in voice mail access (through instruction 1240)
rather than electronic mail and Internet access as would be
appropriate had the user been a former `second device` user.
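Resolving such an ambiguity by familiarity can be sketched as
follows, again with hypothetical names: candidate mappings for the
gesture are gathered, and the mapping from the user's former device
is preferred.

    # Illustrative: prefer the user's most familiar former device.
    event_library = {
        ("first_device",  ("key_1", "hold", 2.0)): "dial_voicemail",
        ("second_device", ("key_1", "hold", 2.0)): "launch_email",
    }

    def resolve(gesture, familiar_device):
        candidates = {d: a for (d, g), a in event_library.items()
                      if g == gesture}
        if familiar_device in candidates:
            return candidates[familiar_device]
        return next(iter(candidates.values()), None)

    print(resolve(("key_1", "hold", 2.0), "first_device"))  # dial_voicemail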
[0168] An embodiment of the cross-platform event engine 1200 also
allows for cross-platform representation of strings and other
executables.
[0169] Like the layout engine in FIG. 7B, the event library 1210 of
the cross-platform event engine 1200 may be integrated with the
abstraction layer 420 of the virtual platform 400 that allows for
the interoperability of a particular user interface on any variety
of devices and/or platforms. This integration may also include
integration with the layout engine 700. While the cross-platform
event engine 1200 and virtual platform 400 need not necessarily be
physically integrated, the agnostic user interface of the present
invention requires that the two components at least be capable of
communicating with one another so as to allow for the translation of
what might be a foreign instruction into an instruction otherwise
comprehensible by the cross-platform event engine 1200.
[0170] While some mobile devices now offer a full QWERTY keyboard
on the device to allow for text entry for the purposes of, for
example, generating electronic mail or updating a contact database,
these keyboards take up a large amount of space and can cause a
mobile device to be too wide or too large for a particular user's
requirements. As such, a number of mobile devices utilize what is
known as triple-tap (sometimes referred to as triple-press) text
entry. FIG. 13A illustrates a portion of a keypad 1300 as might be
utilized in triple-tap text entry on a mobile device as is known in
the prior art.
[0171] In a triple-tap device, the keypad is that of a telephone
keypad with groups of letters in alphabetical order and associated
with particular number keys. For example, number key 2 1310 in FIG.
13A is associated with letters A, B and C. Similarly, number key 3
1320 in FIG. 13A is associated with letters D, E and F.
[0172] To enter text on a device utilizing a triple-tap text entry,
the user taps the key the number of times corresponding to the
position of the letter in the standard ordering. For example, to
enter the letter "A," the user presses the `2` key once; to enter
the letter "B," the user presses the `2` key twice (A-B); to enter
the letter "C," the user presses the `2` key three-times
(A-B-C).
[0173] The difficulty with triple-tap is that the device or
platform operating the triple-tap text entry method must deal with
segmentation issues, that is, when two characters that are mapped
to the same key are entered consecutively (e.g., A and B on number
key two 1310 as in the word absent or A and A on the number key two
1310 as in the word aardvark). The issue becomes determining when
the first `tap` series ends and the second `tap` series begins.
[0174] The typical solution to segmentation is generally known as
the `timeout approach.` Using timeout, a device determines when a
user has finished cycling through characters on a particular key. A
preset timeout period, usually one to two seconds, must elapse
before another character can be entered on the same key. If the
user enters another character on the same key before the timeout
period has elapsed, the current character is overwritten with the
next character in order.
[0175] Using the previous example, to enter the word `absent,` the
user would tap the `2` key 1310 once to enter the letter `A.` The
user must then let the timeout period of two seconds expire before
tapping the `2` key 1310 two more times in order to enter the
letter `B.` If the user taps the `2` key 1310 even once before the
expiration of the timeout period, the initial `A` will be
overwritten by a `B.`
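The timeout approach can be sketched as follows; the keypad subset,
the two-second constant and the timestamped press list are
illustrative assumptions, and the sketch simplifies away word
delimiters.

    # Illustrative prior-art triple-tap segmentation via timeout.
    KEYPAD = {"2": "ABC", "3": "DEF", "7": "PQRS"}
    TIMEOUT = 2.0  # seconds; devices typically use one to two seconds

    def triple_tap(presses):
        # presses: list of (key, timestamp) pairs; returns committed text
        text, last_key, last_time, count = [], None, None, 0
        for key, t in presses:
            if key == last_key and t - last_time < TIMEOUT:
                count += 1        # same series: overwrite with next letter
            else:
                if last_key is not None:
                    letters = KEYPAD[last_key]
                    text.append(letters[count % len(letters)])
                count = 0         # a new tap series begins
            last_key, last_time = key, t
        if last_key is not None:
            letters = KEYPAD[last_key]
            text.append(letters[count % len(letters)])
        return "".join(text)

    # one tap of `2`, a pause past the timeout, then two taps of `2`:
    print(triple_tap([("2", 0.0), ("2", 3.0), ("2", 3.5)]))  # AB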
[0176] Disambiguating input, that is, removing ambiguities (e.g.,
determining when a first `tap` series ends and a second `tap`
series begins) from a keypad using triple-tap can often be tedious and
inefficient. An alternative to triple-tap text entry is T9.RTM.
predictive text input as offered by Tegic Communications, Inc. FIG.
13B illustrates a mobile device 1330 utilizing the T9 text-entry
methodology as is known in the prior art.
[0177] T9.RTM. text-entry incorporates linguistic knowledge, in the
form of a dynamic dictionary and word probabilities, to perform
disambiguation. Using T9.RTM. text-entry, a word is defined as any
sequence of key presses. Generally, the space key (or the zero key)
is used to delineate words and terminate disambiguation. For a
given sequence of key presses, the system retrieves a list of words
from its dictionary that could be entered with that sequence. The
list is then ordered in descending order of word probabilities and
the most probable word is presented to the user initially or, in
some embodiments, automatically entered into the string of text. If
the initial prediction is incorrect, the user can scroll through
the list of predicted words to try and find the correct word.
[0178] Despite the predictive intelligence of a T9.RTM.-type
system, some mobile devices will still rely on triple-tap entry in
conjunction with T9.RTM. for actual entry of text. That is, a user
will still utilize a telephone-type keypad as evidenced in FIG. 13A
in conjunction with the predictive analysis of T9.RTM. as shown in
FIG. 13B. As such, the segmentation issue still remains.
[0179] Normally, a user will become accustomed to the timeout
period of a particular device and learn to adjust to that period
when entering text. Some systems might allow for the timeout period
to be manually set or adjusted by the user. For example, if a user
happens to enter text at a slow pace, the timeout period can be
extended from two-seconds to three- to four-seconds or longer
depending on the user's particular preferences. Likewise, the
timeout period can be shortened if a user is a particularly fast
typist. In other embodiments, the timeout period can be the result
of training (e.g., observing key press rates over the course of
1,000 key presses).
[0180] The difficulty arises, however, when a user switches from
one device or platform to another device or platform. The timeout
period of a first device may not be (and often is not) the same as
that of a second device. As such, the user will often experience a
great deal
of difficulty and delay with regard to entering text into a device
using triple-tap or triple-tap and T9.RTM. in combination on the
new device with a foreign timeout period.
[0181] Using an agnostic interface, however, difficulties
encountered with a particular timeout period can be overcome
whereby a timeout period is made agnostic across various platforms.
Certain embodiments of the present invention provide users with an
interface experience on one particular device (e.g., triple-tap,
T9.RTM., eZi mode) while utilizing a different device.
[0182] In an embodiment of the present invention, as illustrated in
FIG. 14A, an on-screen text box 1410 in a mobile device 1400 is
synchronized with an off-screen text buffer 1420.
[0183] With every key press, the off-screen text buffer 1420 is
populated with indicia of that particular key press. For example,
if the user presses the `2` key, the off-screen buffer 1420 is
populated or `strobed` with an indication that the `2` key has been
pressed once (e.g., 2). If the user presses the `2` key again, the
off-screen buffer 1420 is cleared and re-populated or `strobed`
with an indicator of the `2` key having been pressed twice (e.g.,
2, 2). Should the user then press the `2` key a third-time, the
off-screen buffer 1420 is again cleared and re-populated or
`strobed` with an indicator of the `2` key having now been pressed
three times (e.g., 2, 2, 2). If, at any time, a particular key is
not repeated (e.g., the `2` key) the off-screen buffer 1420
synchronizes with the on-screen text box 1410 and populates the
on-screen text box 1410 as appropriate. That is, the off-screen
text buffer 1420 will effectively `carriage right` by causing
population of the on-screen text box 1410 with the appropriate
textual character.
[0184] For example, if the `2` key is pressed once and then
followed by the `3` key, the off-screen text buffer 1420 will
populate the on-screen text box 1410 with the letter `A` and,
eventually, `D.` If the user presses the `2` key twice, followed by
the `3` key twice, followed by the `6` key twice, then the
on-screen text box 1410 will become populated with the letters `B`
and `E` and, eventually, `N.`
[0185] Using this off-screen and on-screen synchronization
methodology, the timeout function becomes unnecessary. The device,
using the off-screen text buffer 1420, simply needs to refer to the
ongoing string of text in determining whether population of the
on-screen text box 1410 is appropriate. If a key other than the
initial key is pressed (e.g., `2` followed by `3`), then a
right-carriage function is appropriate (e.g., move to the next
character in the word) and the on-screen text box 1410 should be
populated. If the same key as the initial key is pressed (e.g., `2`
followed by `2`), then there exists the possibility of a third
repetition of `2` and population of the on-screen text box 1410
should be delayed until a right-carriage function is confirmed
(i.e., the entry of a new key).
[0186] FIG. 14B further illustrates a string of key press entries
1430 as might be found in an off-screen text buffer 1420 as
described in FIG. 14A. In the present figure, the user has
initially key pressed `7`. Number key `7` can be associated with
the letters `P,` `Q,` `R` and `S` in addition to the number `7.`
[0187] In a traditional triple-tap system, the user would press the
`7` key and the letter `P` (being the first textual character
associated with the `7` key) would immediately appear on the
display. Immediately after pressing the `7` key, the timeout
function would begin wherein an internal timer would begin
calculating the expiration of the timeout period. If the `7` key
were pressed again during the timeout period, the previously
displayed `P` would be immediately converted to the letter `Q`
(being the second textual character associated with the `7` key).
The timeout period would then re-commence as the user could still
desire to enter the letters `R` or `S` in addition to the number
`7.`
[0188] If the user pressed the `7` key after the expiration of the
timeout period and following the initial display of the letter `P,`
however, the letter `P` would remain on the display and a second
letter `P` would appear and the timeout sequence as described above
would now commence for this second textual character to be
displayed. Proper display of text is dependent upon the user
properly making key presses during or outside of the timeout
period.
[0189] Using the off-screen text buffer 1420, however, the device
instead relies on the string of key press entries 1430 versus the
timeout period. For example, the user has key pressed the
aforementioned `7` as well as `3,` `8,` `3` and `6.` As none of the
key presses involve a repeat of a key press (e.g., `3` followed by
`3` thereby indicating the letter `E`), it is determined that each
key press is meant to render the display of a single textual
character associated with each of these number keys (here, S E V E
N). The `0` (space) key is followed by `6,` `3,` `8,` `9,` `6,`
`7,` `5` and `7.` Again, as there are no repeat key presses, it is
determined that the textual display should accord with a single
textual character associated with each key (here, N E T W O R K
S).
[0190] FIG. 15 illustrates a method 1500 for utilizing the
off-screen text buffer 1420 as described in FIGS. 14A and 14B. In
step 1510, the off-screen text buffer (1420) is cleared and made
ready for the entry of a first key press indicator. In step 1520,
the user makes a key press and an indicator of that key press
(e.g., `7`) is entered into the off-screen text buffer (1420;
1430).
[0191] The user will then make a subsequent key press 1530 (e.g.,
`3`) and the device then determines 1540 if the subsequent key
press 1530 is a repeat character as compared to the previously
entered character (e.g., `3`-`3`). If the subsequent key press 1530
is a repeat character, the off-screen text buffer is cleared 1550
and the off-screen text buffer is re-populated 1560 with all
previous indicators plus the most recently entered subsequent key
press indicator. The device will then await the entry of a
subsequent key press 1530.
[0192] If the subsequent key press 1530 is not a repeat character
(e.g., `7`-`3`-`8`), then the device will populate 1570 the
on-screen text box 1410 with the appropriate textual characters
thus far entered (e.g., `S` and `E,` to be followed, presumably, by
the letter `V`). The device will then continue with the entry of a
particular word 1580 in a similar fashion.
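A minimal sketch of method 1500, under the simplifying assumptions
of single-word entry and plain triple-tap semantics (no predictive
layer), follows; the variable names referencing elements 1410 and
1420 are illustrative only.

    # Illustrative buffer-based segmentation without a timeout.
    KEYPAD = {"2": "ABC", "3": "DEF", "6": "MNO",
              "7": "PQRS", "8": "TUV", "9": "WXYZ"}

    def enter_text(key_presses):
        on_screen, buffer = [], []  # text box 1410 / off-screen buffer 1420
        for key in key_presses:
            if buffer and key == buffer[-1]:
                buffer.append(key)  # repeat: re-strobe buffer (1550/1560)
            else:
                if buffer:          # non-repeat: carriage right (1570)
                    on_screen.append(KEYPAD[buffer[0]][len(buffer) - 1])
                buffer = [key]
        if buffer:                  # commit the final pending series
            on_screen.append(KEYPAD[buffer[0]][len(buffer) - 1])
        return "".join(on_screen)

    print(enter_text(["2", "3"]))                      # AD
    print(enter_text(["2", "2", "3", "3", "6", "6"]))  # BEN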
[0193] With the use of T9.RTM., a similar system of on-screen text
box 1410 and off-screen text buffer 1420 and method 1500 are
utilized. With T9.RTM., however, the context of a string of key
presses (e.g., string 1430) is parsed to make intelligent language
determinations. The length of the context (e.g., the length of the
string) can be set as a matter of default, by the user of the
device, or based upon a particular manufacturer setting determined
by the particular parser utilized in a device. For example, context
may be one word, one sentence or even a full paragraph depending on
the intelligence of the T9.RTM.-type system used.
[0194] An embodiment of the present agnostic interface may also
allow for pictorial-language entry, for example, Japanese-language
entry in Kanji. Through an optional translation language engine,
for example, a particular entry in Kanji in the on-screen text box
1410 can be associated with a character entry in the off-screen
text buffer 1420. A translation engine may be embodied in an
abstraction layer 420 and/or through a rules engine or information
library or module.
[0195] FIG. 16 illustrates an exemplary method 1600 for utilizing a
layout engine to display graphics and/or text in an embodiment of
the present invention.
[0196] A request 1610 will first be made of the layout engine to
render certain text or graphics information. In accordance with
this request 1610, a query will be made to the rules engine in step
1620 to obtain information as to whether or not the particular
device or platform can process the request 1610 as initially
submitted to the layout engine dependent upon certain constraints
of the particular device or platform. The layout engine will then
make that determination in step 1630 as to whether the constraints
of the device allow processing of the request 1610.
[0197] If the request 1610 can be processed, the layout engine will
allow display of the requested graphics and/or text 1640 through
the generation of an output instruction as discussed in the context
of FIG. 7A. If the request 1610 cannot be processed, the layout
engine will further determine the constraints of the particular
device in step 1650; that is, what aspect of the present device is
preventing the display of the text and/or graphics information as
submitted through request 1610. The layout engine, in step 1660,
will then determine the scope of the original request and how to
best display the requested text and/or graphics information while
staying within the scope of that request (e.g., rendering a graphic
in red as opposed to the requested maroon). The layout engine,
through an output instruction, will then render graphics and/or
text information in step 1670 that corresponds to the constraints
of the particular device and the original scope of the request
1610.
[0198] During step 1620, if the query to the rules engine results
in no determination of particular limitations of the device, an
optional step 1680 with regard to obtaining that information can be
made before repeating the query to the rules engine as described in
step 1620. This optional information obtaining step 1680 can be the
result of a synchronization operation, a software update, manual
input or an intelligent assumption as made by the layout
engine.
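Method 1600 can be sketched with an assumed rules table and a width
constraint standing in for whatever limitation the device actually
imposes; the rule structure and the scaling fallback below are
illustrative assumptions.

    # Illustrative: render as requested if constraints allow; otherwise
    # adjust within the scope of the original request (steps 1630-1670).
    def handle_request(request, rules):
        constraints = rules.get(request["device"])
        if constraints is None:
            return "obtain device info, then retry"       # optional 1680
        if request["width"] <= constraints["max_width"]:  # step 1630
            return f"render at {request['width']}px"      # step 1640
        return f"render scaled to {constraints['max_width']}px"  # 1650-1670

    rules = {"phone_x": {"max_width": 120}}
    print(handle_request({"device": "phone_x", "width": 200}, rules))
    # render scaled to 120px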
[0199] FIG. 17 illustrates an exemplary method 1700 for utilizing a
cross-platform events engine to execute cross-platform events in a
native environment in an exemplary embodiment of the present
invention.
[0200] In step 1710, an event as input by the user or generated by
a device is received by the cross-platform events engine. In step
1720, the events engine will query the events library to determine
the nature of the request (e.g., what application should be
executed as a result of a two-second button press?).
[0201] If the event request received 1710 by the engine can be
processed on the particular device--as determined in step 1730--the
engine will allow for execution 1740 of the particular application
or occurrence of other actions that result from the initially
received event request.
[0202] If the event request received 1710 by the engine cannot be
processed on the particular device, the engine will further query
the library to determine why the event cannot be processed in step
1750. For example: is any action associated with a two-second
button press, is there a multitude of actions associated with a
two-second button press, or is an illegal operation associated with
a two-second button press (e.g., the associated event is the launch
of a non-present application)? In the case of a multitude of
operations being associated with the event received in step 1710,
the events library might be programmed to note that the user of the
device is a former user of a particular device as would be
discovered during optional query step 1790. In such a situation,
the events engine would associate that former device with the
received event 1710 before associating it with a second or any
other event.
[0203] In step 1760, the events engine will, having determined the
operational limits of the device in step 1750, determine whether
the initial event request received in step 1710 can be converted to
a request that can be processed by the device (e.g., a two-second
hold for voice mail on one device is equivalent to a three-second
hold on the present device thereby requiring the conversion of
information related to a two-second hold as information related to
a three-second hold).
[0204] If the initial request received in step 1710 cannot be
converted, the device will return an illegal operation error
notification in step 1770. The device will then query the user
whether they wish to allocate a particular action or result with
the initially received event request in step 1775. For example, the
present device may not have any action associated with a two-second
button press but will allow the user to assign one. Having assigned
an action to the event in step 1775, the device may then
re-initiate the sequence by providing this new information to the
event library and initiating query step 1720.
[0205] If the initial request received in step 1710 can be
converted, conversion of that request will occur in step 1780.
Various techniques are known in the art for converting one
informational format to another, for example, transcoding, or
`spoofing` one request as another by providing an alias or
`wrapper` around the actual request while presenting it as the
aliased request. Following conversion of the request in step 1780,
the event or action associated with the request will be executed in
step 1785.
[0206] FIGS. 18A and 18B illustrate the management of information
displayed on a mobile device 1800 using five-way navigation. Shown
on the display of mobile device 1800 are a series of electronic
mail messages 1810 as might be displayed in a mailbox feature on a
mobile device.
[0207] In order to manage electronic mail messages, for example, it
is necessary to move a highlight bar to a particular message, open
the message, enter a delete command either manually, through a
drop-down menu or through icon selection, and finally confirm
deletion of a message before the message is removed from a mobile
device's mailbox. In systems where a mobile device is
synchronized with a desktop mailbox (e.g., Microsoft.RTM. Outlook),
an additional confirmation is often required as to whether the user
wishes to delete the message only on the mobile device, only on the
desktop or on both the handheld and the desktop. The process is
then repeated for each message to be deleted. For a user that
receives a large number of electronic mail messages on their mobile
device, this can be extremely tedious and time consuming in
addition to wasting battery and processing resources.
[0208] FIG. 18B illustrates the use of a five-way navigation
control 1820 to manage information such as electronic mail. Using
the five-way navigation control 1820 allows a user to move an icon,
cursor or other indicator on a display up, down, left, and right in
addition to a confirmation or `down click` feature wherein the user
presses down on the center of the navigation tool in an action
sometimes equivalent to the pressing of the carriage return key on
a keyboard. Five-way navigation allows a user to operate various
functionalities of a mobile device with one hand and without the
use of, for example, a stylus.
[0209] In FIG. 18B, as in FIG. 18A, a list of five electronic mails
is presented. Should the user wish to delete two of those
electronic mails (1830 and 1840), using traditional management
methods would require the user to highlight the first message
(1830), open the message, enter a delete command either manually,
through a drop-down menu or through icon selection and then confirm
deletion of the message. Using an exemplary five-way navigation
technique, the user can navigate down to the message to be deleted
(1830) by pressing down on the navigation tool 1820 and then
pressing the navigation tool 1820 to the right and then down
clicking whereby the message is then highlighted and selected for
further action, in this instance, deletion.
[0210] The user can then press the navigation tool 1820 down two
more times to arrive at a second message to be deleted (1840). The
user can then highlight the message for deletion as in the instance
of message 1820. The user can then, at an appropriate time, select
a `delete all` command wherein all highlighted messages are then
deleted.
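A minimal sketch of this batched deletion, assuming messages are
tracked by index once marked via the navigation control, follows;
the structure is illustrative only.

    # Illustrative: mark messages, then remove them with one `delete all`.
    messages = ["msg 1", "msg 2", "msg 3", "msg 4", "msg 5"]
    marked = {1, 3}  # indices highlighted via right-press and down-click

    def delete_all(messages, marked):
        return [m for i, m in enumerate(messages) if i not in marked]

    print(delete_all(messages, marked))  # ['msg 1', 'msg 3', 'msg 5']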
[0211] Using five-way navigation is not limited to deletion of
messages. A user could also select files to review (e.g., where the
user has access to desktop files) or could also manage files or
messages to be placed in particular mobile device folders for
organization using similar navigation and highlighting techniques.
Similarly, a user could select various contacts in a directory to
electronically `beam` (e.g., through a Bluetooth.RTM. or infrared
transmission) to another user.
[0212] The above-described embodiments are exemplary. For example,
the present agnostic interface also allows for building various
applications (e.g., gaming applications) across various platforms
and devices. One skilled in the art will recognize and appreciate
various applications of the disclosed invention beyond those
presently described here. This disclosure is not meant to be
limiting beyond those limitations as expressly provided in the
claims.
* * * * *