U.S. patent application number 13/223,015 was published by the patent office on 2012-04-05 as publication number 20120081268, for launching applications into revealed desktop. This patent application is currently assigned to IMERJ LLC. Invention is credited to Martin Gimpl and Sanjiv Sirpal.

Publication Number: 20120081268
Application Number: 13/223,015
Family ID: 45889332
Publication Date: 2012-04-05
United States Patent Application: 20120081268
Kind Code: A1
Sirpal, Sanjiv; et al.
April 5, 2012
LAUNCHING APPLICATIONS INTO REVEALED DESKTOP
Abstract
A dual-screen user device and methods for launching applications
from a revealed desktop onto a logically chosen screen are
disclosed. Specifically, a user reveals the desktop and then
launches a selected application from one of two desktops displayed
on a primary and secondary screen of a device. When the application
is launched, it is displayed onto a specific screen depending on
the input received and the logical rules determining the display
output. As the application is displayed onto the specific screen,
the desktop is removed from display and the opposite screen can
display other data.
Inventors: Sirpal, Sanjiv (Oakville, CA); Gimpl, Martin (Helsinki, FI)
Assignee: IMERJ LLC (Broomfield, CO)
Family ID: 45889332
Appl. No.: 13/223,015
Filed: August 31, 2011
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
61/389,000         | Oct 1, 2010 |
61/389,117         | Oct 1, 2010 |
61/389,087         | Oct 1, 2010 |
Current U.S. Class: 345/1.1
Current CPC Class: G06F 3/0482 (20130101); G06F 3/04817 (20130101); G06F 1/1647 (20130101); G06F 3/04883 (20130101); G06F 3/04886 (20130101); G06F 3/0412 (20130101); G06F 3/0486 (20130101); G06F 3/1423 (20130101); G06F 3/04845 (20130101); G06F 3/0416 (20130101); G06F 1/1641 (20130101); G06F 3/0483 (20130101); G06F 1/1616 (20130101); G06F 3/04847 (20130101); G06F 3/017 (20130101); G06F 3/0481 (20130101); G06F 3/04842 (20130101); G06F 3/0488 (20130101)
Class at Publication: 345/1.1
International Class: G09G 5/00 (20060101) G09G005/00
Claims
1. A method of displaying an application onto a desktop on a
multi-screen device, comprising: displaying a first application on
a first screen of the multi-screen device; receiving a first
predetermined input to display a desktop on the first screen and
the second screen of the multi-screen device; responding to the
first predetermined input with an output that causes the desktop to
be displayed on the first screen and the second screen of the
multi-screen device; receiving a second predetermined input on the
first screen that represents an instruction to display a second
application on the first screen of the multi-screen device;
responding to the second predetermined input with an output that
causes the second application to be displayed on the first screen
and which also causes the first application to be displayed on the
second screen of the multi-screen device.
2. The method of claim 1, wherein the first application is a
multi-screen application and is displayed simultaneously on the
first and second screen.
3. The method of claim 2, wherein in response to the first
predetermined input the multi-screen application is stored in a
virtual stack, the method further comprising: registering, in
memory, the first application for display by the first screen and
the second screen; preventing the first application from displaying
on the first screen and the second screen where the first desktop
screen and second desktop screen are displayed; and wherein access
to the memory is provided such that the first application can be
retrieved.
4. The method of claim 3, wherein the virtual stack is split into a
left stack and a right stack.
5. The method of claim 4, wherein the application is split into a
first and second part where the first part is stored in a first
stack and the second part is stored in a second stack; and wherein
the first stack represents data displayed on the first screen and
the second stack represents data displayed on the second screen;
and wherein the first stack is stored in the left stack and the
second stack is stored in the right stack.
6. The method of claim 1, wherein in response to the second
predetermined input the first application is displayed on the
second screen in a single-page format.
7. The method of claim 1, wherein the first and second
applications are the same.
8. The method of claim 1, wherein prior to receiving a first
predetermined input to display a desktop on the first screen and
the second screen of the multi-screen device, the multi-screen
device is displaying the first application on the first screen and
a third application on the second screen; and wherein responding to
the second predetermined input with an output that causes the
second application to be displayed on the first screen also causes
the determination of at least one of 1) the first application to be
displayed on the second screen, and 2) the third application to be
displayed on the second screen.
9. The method of claim 8, wherein the determination of which
application to display involves utilizing at least one
of: 1) application priority information; 2) the application
sequence; and 3) a logical display algorithm based on predefined
application criteria.
10. A non-transitory computer-readable medium having stored thereon
instructions that cause a computing system to execute a method, the
instructions comprising: instructions configured to display a first
application on a first screen of a multi-screen device;
instructions configured to receive a first predetermined input that
represents an instruction to reveal a desktop on the first screen
and the second screen of the multi-screen device; instructions
configured to respond to the first predetermined input with an
output that causes the desktop to be displayed on the first screen
and the second screen of the multi-screen device; instructions
configured to receive a second predetermined input that represents
an instruction to display a second application on the first screen
of the multi-screen device; instructions configured to respond to
the second predetermined input with an output that causes the
second application to be displayed on the first screen and which
also causes the first application to be displayed on the second
screen of the multi-screen device.
11. The computer-readable medium of claim 10, wherein the first
screen corresponds to a primary screen and wherein the second
screen corresponds to a secondary screen.
12. The computer-readable medium of claim 10, wherein the first
application is a multi-screen application and is displayed
simultaneously on the first and second screen.
13. The computer-readable medium of claim 10, wherein in response to the first
predetermined input the multi-screen application is stored in a
virtual stack, the method further comprising: registering, in
memory, the first application for display by the first screen and
the second screen; preventing the first application from displaying
on the first screen and the second screen where the first desktop
screen and second desktop screen are displayed; and wherein access
to the memory is provided such that the first application can be
retrieved.
14. The computer-readable medium of claim 13, wherein the virtual
stack is split into a left stack and a right stack.
15. The computer-readable medium of claim 14, wherein the application is split into a
first and second part where the first part is stored in a first
stack and the second part is stored in a second stack; and wherein
the first stack represents data displayed on the first screen and
the second stack represents data displayed on the second screen;
and wherein the first stack is stored in the left stack and the
second stack is stored in the right stack.
16. The computer-readable medium of claim 10, wherein in response
to the second predetermined input the first application is
displayed on the second screen in a single-page format.
17. The computer-readable medium of claim 10, wherein the first and
second applications are the same.
18. The computer-readable medium of claim 10, wherein prior to receiving a first
predetermined input to display a desktop on the first screen and
the second screen of the multi-screen device, the multi-screen
device is displaying the first application on the first screen and
a third application on the second screen; and wherein responding to
the second predetermined input with an output that causes the
second application to be displayed on the first screen also causes
the determination of at least one of 1) the first application to be
displayed on the second screen, and 2) the third application to be
displayed on the second screen.
19. The computer-readable medium of claim 18, wherein the determination of
which application to display involves utilizing at least one
of: 1) application priority information; 2) the application
sequence; and 3) a logical display algorithm based on predefined
application criteria.
20. A dual-screen user device, comprising: a first screen including
a first display area; a second screen including a second display
area; a first user input gesture area configured to receive a first
signal indicative of a position on the first screen; a second user
input gesture area configured to receive a second signal indicative
of a position on the second screen; and a computer-readable medium
having instructions stored thereon that include: a first set of
instructions configured to determine a received signal based on the
input origin of a first or second signal; and a second set of
instructions configured to determine whether the received signal
corresponds to the position of a known application displayed on the
first or second display; and a third set of instructions configured
to launch the known application and display it to the screen from
where the received signal originated.
Description
CROSS REFERENCE TO RELATED APPLICATION
[0001] The present application claims the benefits of and priority,
under 35 U.S.C. § 119(e), to U.S. Provisional Application Ser.
Nos. 61/389,000, filed Oct. 1, 2010, entitled "DUAL DISPLAY
WINDOWING SYSTEM;" 61/389,117, filed Oct. 1, 2010, entitled
"MULTI-OPERATING SYSTEM PORTABLE DOCKETING DEVICE;" and
61/389,087, filed Oct. 1, 2010, entitled "TABLET COMPUTING USER
INTERFACE." Each of the aforementioned documents is incorporated
herein by this reference in its entirety for all that it teaches
and for all purposes.
BACKGROUND
[0002] A substantial number of handheld computing devices, such as
cellular phones, tablets, and E-Readers, make use of a touch screen
display not only to deliver display information to the user but
also to receive inputs from user interface commands. While touch
screen displays may increase the configurability of the handheld
device and provide a wide variety of user interface options, this
flexibility typically comes at a price. The dual use of the touch
screen to provide content and receive user commands, while flexible
for the user, may obfuscate the display and cause visual clutter,
thereby leading to user frustration and loss of productivity.
[0003] The small form factor of handheld computing devices requires
a careful balancing between the displayed graphics and the area
provided for receiving inputs. On the one hand, the small display
constrains the display space, which may increase the difficulty of
interpreting actions or results. On the other hand, a virtual keypad or
other user interface scheme is superimposed on or positioned
adjacent to an executing application, requiring the application to
be squeezed into an even smaller portion of the display.
[0004] This balancing act is particularly difficult for single
display touch screen devices. Single display touch screen devices
are crippled by their limited screen space. When users are entering
information into the device through the single display, the
ability to interpret information on the display can be severely
hampered, particularly when a complex interaction between display
and interface is required.
SUMMARY
[0005] There is a need for a multi-display handheld computing
device that provides for enhanced power and/or versatility compared
to conventional single display handheld computing devices. These
and other needs are addressed by the various aspects, embodiments,
and/or configurations of the present disclosure. Also, while the
disclosure is presented in terms of exemplary embodiments, it
should be appreciated that individual aspects of the disclosure can
be separately claimed.
[0006] A method of displaying an application onto a desktop on a
multi-screen device, comprising:
[0007] displaying a first application on a first screen of the
multi-screen device;
[0008] receiving a first predetermined input to display a desktop
on the first screen and the second screen of the multi-screen
device;
[0009] responding to the first predetermined input with an output
that causes the desktop to be displayed on the first screen and the
second screen of the multi-screen device;
[0010] receiving a second predetermined input on the first screen
that represents an instruction to display a second application on
the first screen of the multi-screen device;
[0011] responding to the second predetermined input with an output
that causes the second application to be displayed on the first
screen and which also causes the first application to be displayed
on the second screen of the multi-screen device.
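
By way of illustration only, the method just summarized can be sketched as a small two-screen model in which each screen shows either the desktop or one application. All names below (DualScreenDevice, Screen, and so on) are hypothetical; this is a minimal sketch of the claimed sequence, not the device firmware.

// Minimal sketch of the summarized method: reveal the desktop on both
// screens, then launch a second application onto the first screen while
// the previously displayed application returns on the second screen.
public class DualScreenDevice {
    enum Content { DESKTOP, APP }

    static class Screen {
        Content content = Content.DESKTOP;
        String appName; // meaningful only when content == APP
    }

    private final Screen first = new Screen();
    private final Screen second = new Screen();
    private String backgroundedApp; // app hidden when the desktop was revealed

    // First predetermined input: reveal the desktop on both screens,
    // remembering the application that was displayed.
    void revealDesktop() {
        if (first.content == Content.APP) backgroundedApp = first.appName;
        first.content = Content.DESKTOP;
        second.content = Content.DESKTOP;
    }

    // Second predetermined input: display a second application on the
    // first screen; the prior application reappears on the second screen.
    void launchOnFirstScreen(String newApp) {
        first.content = Content.APP;
        first.appName = newApp;
        if (backgroundedApp != null) {
            second.content = Content.APP;
            second.appName = backgroundedApp;
        }
    }

    public static void main(String[] args) {
        DualScreenDevice d = new DualScreenDevice();
        d.first.content = Content.APP;
        d.first.appName = "Email";
        d.revealDesktop();               // desktop now on both screens
        d.launchOnFirstScreen("Browser");
        System.out.println(d.first.appName + " / " + d.second.appName);
        // prints: Browser / Email
    }
}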
[0012] A non-transitory computer-readable medium having stored
thereon instructions that cause a computing system to execute a
method, the instructions comprising:

[0013] instructions configured to display a first application on a
first screen of a multi-screen device;
[0014] instructions configured to receive a first predetermined
input that represents an instruction to reveal a desktop on the
first screen and the second screen of the multi-screen device;
[0015] instructions configured to respond to the first
predetermined input with an output that causes the desktop to be
displayed on the first screen and the second screen of the
multi-screen device;
[0016] instructions configured to receive a second predetermined
input that represents an instruction to display a second
application on the first screen of the multi-screen device;
[0017] instructions configured to respond to the second
predetermined input with an output that causes the second
application to be displayed on the first screen and which also
causes the first application to be displayed on the second screen
of the multi-screen device.
[0018] A dual-screen user device, comprising:
[0019] a first screen including a first display area;
[0020] a second screen including a second display area;
[0021] a first user input gesture area configured to receive a
first signal indicative of a position on the first screen;
[0022] a second user input gesture area configured to receive a
second signal indicative of a position on the second screen;
and
[0023] a computer-readable medium having instructions stored
thereon that include:

[0024] a first set of instructions configured to determine a
received signal based on the input origin of a first or second
signal;

[0025] a second set of instructions configured to determine whether
the received signal corresponds to the position of a known
application displayed on the first or second display; and

[0026] a third set of instructions configured to launch the known
application and display it to the screen from where the received
signal originated.
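
Purely as a hedged illustration of the three instruction sets above (the class, record layout, and icon geometry are assumptions, not the application's code), the dispatch logic might be sketched as follows.

import java.util.List;

// Sketch of the three instruction sets: classify a signal by its screen
// of origin, hit-test it against known application icons, and launch the
// matched application on the originating screen.
public class GestureDispatch {
    record Signal(int screenId, int x, int y) {}                  // origin + position
    record IconRegion(int screenId, int x, int y, int w, int h, String app) {}

    // Second instruction set: does the received signal fall on a known
    // application displayed on the first or second display?
    static String hitTest(Signal s, List<IconRegion> icons) {
        for (IconRegion r : icons) {
            if (r.screenId() == s.screenId()
                    && s.x() >= r.x() && s.x() < r.x() + r.w()
                    && s.y() >= r.y() && s.y() < r.y() + r.h()) {
                return r.app();
            }
        }
        return null; // no known application at that position
    }

    // Third instruction set: launch on the screen the signal came from.
    static void launch(String app, int screenId) {
        System.out.println("Launching " + app + " on screen " + screenId);
    }

    public static void main(String[] args) {
        List<IconRegion> icons =
            List.of(new IconRegion(1, 0, 0, 64, 64, "Browser"));
        Signal tap = new Signal(1, 10, 10); // first set: origin is screen 1
        String app = hitTest(tap, icons);
        if (app != null) launch(app, tap.screenId());
    }
}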
[0027] The present disclosure can provide a number of advantages
depending on the particular aspect, embodiment, and/or
configuration. Currently, the consumer electronics industry is
dominated by single-screen devices. Unfortunately, these devices
are limited in the manner in which they can efficiently display
information and receive user input. Specifically, multiple
applications and desktops cannot be adequately shown on a single
screen and require the user to constantly switch between displayed
pages to access content from more than one application.
Additionally, user input devices such as keyboards, touch-sensitive
or capacitive displays, and hardware interface buttons are usually
reduced in size to fit onto a single-screen device. Manipulating
this type of device and being forced to switch between multiple
applications that only use one screen results in user fatigue,
frustration, and, in some cases, repetitive motion injuries.
[0028] Recently, dual-screen devices have been made available to
consumers of electronic devices. However, the currently available
dual-screen devices have failed to adequately address the needs of
the consumer. Although the devices include two screens in their
design, they tend to incorporate the negative limitations of their
single-screen counterparts. In particular, the typical dual-screen
device limits the user interface to a particular screen, in some
cases only providing a keyboard, or touch-sensitive/capacitive
display, on one of the screens. Moreover, the management of the
device's applications and desktops is limited to the traditional
concepts of single-screen content switching. The present disclosure
addresses the limitations of the traditional single/dual-screen
devices and provides advantages in display, input, and content
management.
[0029] At least one embodiment of the present disclosure describes
a multi-screen device and methods for managing the display of
content that allows the user a greater degree of creative latitude
when operating the device. In particular, when a device is running
an application or group of applications, the device is capable of
detecting a user gesture input that can reveal a desktop on
multiple screens of the device. This desktop can show a
representation of different applications that the user can select.
From this desktop, a user launches an application onto a specific
screen by choosing the application from the desired launch screen.
While, or after, the application displays on the chosen screen, one
of the previously displayed applications is displayed on the
non-chosen screen. The management of the displayed applications can
be directed by the device or the user. These and other advantages
will be apparent from the disclosure.
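
One way to picture the bookkeeping implied by claims 3-5 — a hidden application kept retrievable in left and right virtual stacks — is the following sketch. The class name and the ":part1"/":part2" naming are illustrative assumptions only.

import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of the left/right virtual stacks of claims 3-5: revealing the
// desktop does not destroy the running multi-screen application; its two
// parts are pushed onto per-screen stacks so it can be retrieved later.
public class VirtualStacks {
    private final Deque<String> left = new ArrayDeque<>();  // backs screen 1
    private final Deque<String> right = new ArrayDeque<>(); // backs screen 2

    // Hide a multi-screen app by splitting it across the two stacks.
    void hideMultiScreenApp(String app) {
        left.push(app + ":part1");
        right.push(app + ":part2");
    }

    // The hidden application remains accessible in memory (claim 3).
    String peekHidden() {
        return left.peek() + " | " + right.peek();
    }

    public static void main(String[] args) {
        VirtualStacks stacks = new VirtualStacks();
        stacks.hideMultiScreenApp("Email");      // desktop revealed
        System.out.println(stacks.peekHidden()); // Email:part1 | Email:part2
    }
}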
[0030] The phrases "at least one", "one or more", and "and/or" are
open-ended expressions that are both conjunctive and disjunctive in
operation. For example, each of the expressions "at least one of A,
B and C", "at least one of A, B, or C", "one or more of A, B, and
C", "one or more of A, B, or C" and "A, B, and/or C" means A alone,
B alone, C alone, A and B together, A and C together, B and C
together, or A, B and C together.
[0031] The term "a" or "an" entity refers to one or more of that
entity. As such, the terms "a" (or "an"), "one or more" and "at
least one" can be used interchangeably herein. It is also to be
noted that the terms "comprising", "including", and "having" can be
used interchangeably.
[0032] The term "automatic" and variations thereof, as used herein,
refers to any process or operation done without material human
input when the process or operation is performed. However, a
process or operation can be automatic, even though performance of
the process or operation uses material or immaterial human input,
if the input is received before performance of the process or
operation. Human input is deemed to be material if such input
influences how the process or operation will be performed. Human
input that consents to the performance of the process or operation
is not deemed to be "material".
[0033] The term "computer-readable medium" as used herein refers to
any tangible storage and/or transmission medium that participates in
providing instructions to a processor for execution. Such a medium
may take many forms, including but not limited to, non-volatile
media, volatile media, and transmission media. Non-volatile media
includes, for example, NVRAM, or magnetic or optical disks.
Volatile media includes dynamic memory, such as main memory. Common
forms of computer-readable media include, for example, a floppy
disk, a flexible disk, hard disk, magnetic tape, or any other
magnetic medium, magneto-optical medium, a CD-ROM, any other
optical medium, punch cards, paper tape, any other physical medium
with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, a
solid state medium like a memory card, any other memory chip or
cartridge, a carrier wave as described hereinafter, or any other
medium from which a computer can read. A digital file attachment to
e-mail or other self-contained information archive or set of
archives is considered a distribution medium equivalent to a
tangible storage medium. When the computer-readable media is
configured as a database, it is to be understood that the database
may be any type of database, such as relational, hierarchical,
object-oriented, and/or the like. Accordingly, the disclosure is
considered to include a tangible storage medium or distribution
medium and prior art-recognized equivalents and successor media, in
which the software implementations of the present disclosure are
stored.
[0034] The term "desktop" refers to a metaphor used to portray
systems. A desktop is generally considered a "surface" that
typically includes pictures, called icons, widgets, folders, etc.
that can activate show applications, windows, cabinets, files,
folders, documents, and other graphical items. The icons are
generally selectable to initiate an task through user interface
interaction to allow a user to execute applications or conduct
other operations.
[0035] The term "display" refers to a portion of a screen used to
display the output of a computer to a user.
[0036] The term "displayed image" refers to an image produced on
the display. A typical displayed image is a window or desktop. The
displayed image may occupy all or a portion of the display.
[0037] The term "display orientation" refers to the way in which a
rectangular display is oriented by a user for viewing. The two most
common types of display orientation are portrait and landscape. In
landscape mode, the display is oriented such that the width of the
display is greater than the height of the display (such as a 4:3
ratio, which is 4 units wide and 3 units tall, or a 16:9 ratio,
which is 16 units wide and 9 units tall). Stated differently, the
longer dimension of the display is oriented substantially
horizontal in landscape mode while the shorter dimension of the
display is oriented substantially vertical. In the portrait mode,
by contrast, the display is oriented such that the width of the
display is less than the height of the display. Stated differently,
the shorter dimension of the display is oriented substantially
horizontal in the portrait mode while the longer dimension of the
display is oriented substantially vertical. The multi-screen
display can have one composite display that encompasses all the
screens. The composite display can have different display
characteristics based on the various orientations of the
device.
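
The definition reduces to a simple comparison of the display's dimensions, illustrated in this trivial sketch (not device code):

// The portrait/landscape definition above as a one-line comparison.
public class Orientation {
    static String of(int widthPx, int heightPx) {
        return widthPx > heightPx ? "landscape" : "portrait";
    }

    public static void main(String[] args) {
        System.out.println(of(1600, 900)); // 16:9 -> landscape
        System.out.println(of(600, 800));  // -> portrait
    }
}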
[0038] The term "gesture" refers to a user action that expresses an
intended idea, action, meaning, result, and/or outcome. The user
action can include manipulating a device (e.g., opening or closing
a device, changing a device orientation, moving a trackball or
wheel, etc.), movement of a body part in relation to the device,
movement of an implement or tool in relation to the device, audio
inputs, etc. A gesture may be made on a device (such as on the
screen) or with the device to interact with the device.
[0039] The term "module" as used herein refers to any known or
later developed hardware, software, firmware, artificial
intelligence, fuzzy logic, or combination of hardware and software
that is capable of performing the functionality associated with
that element.
[0040] The term "gesture capture" refers to a sense or otherwise a
detection of an instance and/or type of user gesture. The gesture
capture can occur in one or more areas of the screen, A gesture
region can be on the display, where it may be referred to as a
touch sensitive display or off the display where it may be referred
to as a gesture capture area.
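
As a rough sketch of this distinction, a touch might be classified by position as below; the boundary constant is an invented layout value used only for illustration.

// Classify a touch by position into the touch sensitive display or the
// off-display gesture capture area. DISPLAY_HEIGHT_PX is an assumed
// layout constant, not a value from the application.
public class GestureRegion {
    static final int DISPLAY_HEIGHT_PX = 800;

    static String classify(int yPx) {
        return yPx < DISPLAY_HEIGHT_PX ? "touch sensitive display"
                                       : "gesture capture area";
    }

    public static void main(String[] args) {
        System.out.println(classify(400)); // on the display
        System.out.println(classify(850)); // in the gesture capture area
    }
}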
[0041] A "multi-screen application" refers to an application that
is capable of producing one or more windows that may simultaneously
occupy multiple screens. A multi-screen application commonly can
operate in single-screen mode in which one or more windows of the
application are displayed only on one screen or in multi-screen
mode in which one or more windows are displayed simultaneously on
multiple screens.
[0042] A "single-screen application" refers to an application that
is capable of producing one or more windows that may occupy only a
single screen at a time.
[0043] The term "screen," "touch screen," or "touchscreen" refers
to a physical structure that enables the user to interact with the
computer by touching areas on the screen and provides information
to a user through a display. The touch screen may sense user
contact in a number of different ways, such as by a change in an
electrical parameter (e.g., resistance or capacitance), acoustic
wave variations, infrared radiation proximity detection, light
variation detection, and the like. In a resistive touch screen, for
example, normally separated conductive and resistive metallic
layers in the screen pass an electrical current. When a user
touches the screen, the two layers make contact in the contacted
location, whereby a change in electrical field is noted and the
coordinates of the contacted location calculated. In a capacitive
touch screen, a capacitive layer stores electrical charge, which is
discharged to the user upon contact with the touch screen, causing
a decrease in the charge of the capacitive layer. The decrease is
measured, and the contacted location coordinates determined. In a
surface acoustic wave touch screen, an acoustic wave is transmitted
through the screen, and the acoustic wave is disturbed by user
contact. A receiving transducer detects the user contact instance
and determines the contacted location coordinates.
[0044] The term "window" refers to a, typically rectangular,
displayed image on at least part of a display that contains or
provides content different from the rest of the screen. The window
may obscure the desktop.
[0045] The terms "determine", "calculate" and "compute," and
variations thereof, as used herein, are used interchangeably and
include any type of methodology, process, mathematical operation or
technique.
[0046] It shall be understood that the term "means" as used herein
shall be given its broadest possible interpretation in accordance
with 35 U.S.C., Section 112, Paragraph 6. Accordingly, a claim
incorporating the term "means" shall cover all structures,
materials, or acts set forth herein, and all of the equivalents
thereof. Further, the structures, materials or acts and the
equivalents thereof shall include all those described in the
summary of the invention, brief description of the drawings,
detailed description, abstract, and claims themselves.
[0047] The preceding is a simplified summary of the disclosure to
provide an understanding of some aspects of the disclosure. This
summary is neither an extensive nor exhaustive overview of the
disclosure and its various aspects, embodiments, and/or
configurations. It is intended neither to identify key or critical
elements of the disclosure nor to delineate the scope of the
disclosure but to present selected concepts of the disclosure in a
simplified form as an introduction to the more detailed description
presented below. As will be appreciated, other aspects,
embodiments, and/or configurations of the disclosure are possible
utilizing, alone or in combination, one or more of the features set
forth above or described in detail below.
BRIEF DESCRIPTION OF THE DRAWINGS
[0048] FIG. 1A includes a first view of an embodiment of a
multi-screen user device;
[0049] FIG. 1B includes a second view of an embodiment of a
multi-screen user device;
[0050] FIG. 1C includes a third view of an embodiment of a
multi-screen user device;
[0051] FIG. 1D includes a fourth view of an embodiment of a
multi-screen user device;
[0052] FIG. 1E includes a fifth view of an embodiment of a
multi-screen user device;
[0053] FIG. 1F includes a sixth view of an embodiment of a
multi-screen user device;
[0054] FIG. 1G includes a seventh view of an embodiment of a
multi-screen user device;
[0055] FIG. 1H includes an eighth view of an embodiment of a
multi-screen user device;
[0056] FIG. 1I includes a ninth view of an embodiment of a
multi-screen user device;
[0057] FIG. 1J includes a tenth view of an embodiment of a
multi-screen user device;
[0058] FIG. 2 is a block diagram of an embodiment of the hardware
of the device;
[0059] FIG. 3A is a block diagram of an embodiment of the state
model for the device based on the device's orientation and/or
configuration;
[0060] FIG. 3B is a table of an embodiment of the state model for
the device based on the device's orientation and/or
configuration;
[0061] FIG. 4A is a first representation of an embodiment of user
gesture received at a device;
[0062] FIG. 4B is a second representation of an embodiment of user
gesture received at a device;
[0063] FIG. 4C is a third representation of an embodiment of user
gesture received at a device;
[0064] FIG. 4D is a fourth representation of an embodiment of user
gesture received at a device;
[0065] FIG. 4E is a fifth representation of an embodiment of user
gesture received at a device;
[0066] FIG. 4F is a sixth representation of an embodiment of user
gesture received at a device;
[0067] FIG. 4G is a seventh representation of an embodiment of user
gesture received at a device;
[0068] FIG. 4H is an eighth representation of an embodiment of user
gesture received at a device;
[0069] FIG. 5A is a block diagram of an embodiment of the device
software and/or firmware;
[0070] FIG. 5B is a second block diagram of an embodiment of the
device software and/or firmware;
[0071] FIG. 6A is a first representation of an embodiment of a
device configuration generated in response to the device state;
[0072] FIG. 6B is a second representation of an embodiment of a
device configuration generated in response to the device state;
[0073] FIG. 7A depicts a first display state of an open device with
various non-displayed windows stacked on either side of the visible
display in accordance with embodiments of the present
disclosure;
[0074] FIG. 7B depicts a second display state of an open device
with various non-displayed windows stacked on either side of the
visible display in accordance with embodiments of the present
disclosure;
[0075] FIG. 7C depicts a third display state of an open device with
various non-displayed windows stacked on either side of the visible
display in accordance with embodiments of the present
disclosure;
[0076] FIG. 8 is a flow diagram depicting an application launching
method from the revealed desktop in accordance with embodiments of
the present disclosure; and
[0077] FIG. 9 is a flow diagram depicting an application launching
method from the revealed desktop in accordance with embodiments of
the present disclosure.
[0078] In the appended figures, similar components and/or features
may have the same reference label. Further, various components of
the same type may be distinguished by following the reference label
by a letter that distinguishes among the similar components. If
only the first reference label is used in the specification, the
description is applicable to any one of the similar components
having the same first reference label irrespective of the second
reference label.
DETAILED DESCRIPTION
[0079] Presented herein are embodiments of a device. The device can
be a communications device, such as a cellular telephone, or other
smart device. The device can include two screens that are oriented
to provide several unique display configurations. Further, the
device can receive user input in unique ways. The overall design
and functionality of the device provides for an enhanced user
experience, making the device more useful and more efficient.
[0080] Mechanical Features:
[0081] FIGS. 1A-1J illustrate a device 100 in accordance with
embodiments of the present disclosure. As described in greater
detail below, device 100 can be positioned in a number of different
ways, each of which provides different functionality to a user. The
device 100 is a multi-screen device that includes a primary screen
104 and a secondary screen 108, both of which are touch sensitive.
In embodiments, the entire front surface of screens 104 and 108 may
be touch sensitive and capable of receiving input by a user
touching the front surface of the screens 104 and 108. Primary
screen 104 includes touch sensitive display 110, which, in addition
to being touch sensitive, also displays information to a user.
Secondary screen 108 includes touch sensitive display 114, which
also displays information to a user. In other embodiments, screens
104 and 108 may include more than one display area.
[0082] Primary screen 104 also includes a configurable area 112
that has been configured for specific inputs when the user touches
portions of the configurable area 112. Secondary screen 108 also
includes a configurable area 116 that has been configured for
specific inputs. Areas 112a and 116a have been configured to
receive a "back" input indicating that a user would like to view
information previously displayed. Areas 112b and 116b have been
configured to receive a "menu" input indicating that the user would
like to view options from a menu. Areas 112c and 116c have been
configured to receive a "home" input indicating that the user would
like to view information associated with a "home" view. In other
embodiments, areas 112a-c and 116a-c may be configured, in addition
to the configurations described above, for other types of specific
inputs including controlling features of device 100, some
non-limiting examples including adjusting overall system power,
adjusting the volume, adjusting the brightness, adjusting the
vibration, selecting of displayed items (on either of screen 104 or
108), operating a camera, operating a microphone, and
initiating/terminating of telephone calls. Also, in some
embodiments, areas 112a-c and 116a-c may be configured for specific
inputs depending upon the application running on device 100 and/or
information displayed on touch sensitive displays 110 and/or
114.
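
The default configuration of areas 112a-c and 116a-c described above amounts to a simple mapping from area to input. The enum below is a hypothetical stand-in for however the firmware actually names these areas.

import java.util.EnumMap;
import java.util.Map;

// Illustrative default mapping of the configurable areas to inputs.
public class ConfigurableAreas {
    enum Area { AREA_A, AREA_B, AREA_C }

    static final Map<Area, String> DEFAULTS = new EnumMap<>(Area.class);
    static {
        DEFAULTS.put(Area.AREA_A, "back"); // areas 112a / 116a
        DEFAULTS.put(Area.AREA_B, "menu"); // areas 112b / 116b
        DEFAULTS.put(Area.AREA_C, "home"); // areas 112c / 116c
    }

    public static void main(String[] args) {
        DEFAULTS.forEach((area, input) ->
            System.out.println(area + " -> " + input));
    }
}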
[0083] In addition to touch sensing, primary screen 104 and
secondary screen 108 may also include areas that receive input from
a user without requiring the user to touch the display area of the
screen. For example, primary screen 104 includes gesture capture
area 120, and secondary screen 108 includes gesture capture area
124. These areas are able to receive input by recognizing gestures
made by a user without the need for the user to actually touch the
surface of the display area. In comparison to touch sensitive
displays 110 and 114, the gesture capture areas 120 and 124 are
commonly not capable of rendering a displayed image.
[0084] The two screens 104 and 108 are connected together with a
hinge 128, shown clearly in FIG. 1C (illustrating a back view of
device 100). Hinge 128, in the embodiment shown in FIGS. 1A-1J, is
a center hinge that connects screens 104 and 108 so that when the
hinge is closed, screens 104 and 108 are juxtaposed (i.e.,
side-by-side) as shown in FIG. 1B (illustrating a front view of
device 100). Hinge 128 can be opened to position the two screens
104 and 108 in different relative positions to each other. As
described in greater detail below, the device 100 may have
different functionalities depending on the relative positions of
screens 104 and 108.
[0085] FIG. 1D illustrates the right side of device 100. As shown
in FIG. 1D, secondary screen 108 also includes a card slot 132 and
a port 136 on its side. Card slot 132, in embodiments, accommodates
different types of cards including a subscriber identity module
(SIM). Port 136 in embodiments is an input/output port (I/O port)
that allows device 100 to be connected to other peripheral devices,
such as a display, keyboard, or printing device. As can be
appreciated, these are merely some examples and in other
embodiments device 100 may include other slots and ports such as
slots and ports for accommodating additional memory devices and/or
for connecting other peripheral devices. Also shown in FIG. 1D is
an audio jack 140 that accommodates a tip, ring, sleeve (TRS)
connector for example to allow a user to utilize headphones or a
headset.
[0086] Device 100 also includes a number of buttons 158. For
example, FIG. 1E illustrates the left side of device 100. As shown
in FIG. 1E, the side of primary screen 104 includes three buttons
144, 148, and 152, which can be configured for specific inputs. For
example, buttons 144, 148, and 152 may be configured to, in
combination or alone, control a number of aspects of device 100.
Some non-limiting examples include overall system power, volume,
brightness, vibration, selection of displayed items (on either of
screen 104 or 108), a camera, a microphone, and
initiation/termination of telephone calls. In some embodiments,
instead of separate buttons, two buttons may be combined into a
rocker button. This arrangement is useful in situations where the
buttons are configured to control features such as volume or
brightness. In addition to buttons 144, 148, and 152, device 100
also includes a button 156, shown in FIG. 1F, which illustrates the
top of device 100. In one embodiment, button 156 is configured as
an on/off button used to control overall system power to device
100. In other embodiments, button 156 is configured to, in addition
to or in lieu of controlling system power, control other aspects of
device 100. In some embodiments, one or more of the buttons 144,
148, 152, and 156 are capable of supporting different user
commands. By way of example, a normal press has a duration commonly
of less than about 1 second and resembles a quick tap. A medium
press has a duration commonly of 1 second or more but less than
about 12 seconds. A long press has a duration commonly of about 12
seconds or more. The function of the buttons is normally specific
to the application that is currently in focus on the respective
display 110 and 114. In a telephone application for instance and
depending on the particular button, a normal, medium, or long press
can mean end call, increase in call volume, decrease in call
volume, and toggle microphone mute. In a camera or video
application for instance and depending on the particular button, a
normal, medium, or long press can mean increase zoom, decrease
zoom, and take photograph or record video.
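
The normal/medium/long press distinction maps directly onto duration thresholds. A minimal sketch using the approximate values from the text (about 1 second and about 12 seconds) follows.

// Classify a button press by its duration, per the approximate
// thresholds described above.
public class PressClassifier {
    enum Press { NORMAL, MEDIUM, LONG }

    static Press classify(long durationMs) {
        if (durationMs < 1_000) return Press.NORMAL;  // quick tap
        if (durationMs < 12_000) return Press.MEDIUM;
        return Press.LONG;
    }

    public static void main(String[] args) {
        System.out.println(classify(200));    // NORMAL
        System.out.println(classify(5_000));  // MEDIUM
        System.out.println(classify(15_000)); // LONG
    }
}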
[0087] There are also a number of hardware components within device
100. As illustrated in FIG. 1C, device 100 includes a speaker 160
and a microphone 164. Device 100 also includes a camera 168 (FIG.
1B). Additionally, device 100 includes two position sensors 172A
and 172B, which are used to determine the relative positions of
screens 104 and 108. In one embodiment, position sensors 172A and
172B are Hall effect sensors. However, in other embodiments other
sensors can be used in addition to or in lieu of the Hall effect
sensors. An accelerometer 176 may also be included as part of
device 100 to determine the orientation of the device 100 and/or
the orientation of screens 104 and 108. Additional internal
hardware components that may be included in device 100 are
described below with respect to FIG. 2.
[0088] The overall design of device 100 allows it to provide
additional functionality not available in other communication
devices. Some of the functionality is based on the various
positions and orientations that device 100 can have. As shown in
FIGS. 1B-1G, device 100 can be operated in an "open" position where
screens 104 and 108 are juxtaposed. This position allows a large
display area for displaying information to a user. When position
sensors 172A and 172B determine that device 100 is in the open
position, they can generate a signal that can be used to trigger
different events such as displaying information on both screens 104
and 108. Additional events may be triggered if accelerometer 176
determines that device 100 is in a portrait position (FIG. 1B) as
opposed to a landscape position (not shown).
[0089] In addition to the open position, device 100 may also have a
"closed" position illustrated in FIG. 1H. Again, position sensors
172A and 172B can generate a signal indicating that device 100 is
in the "closed" position. This can trigger an event that results in
a change of displayed information on screen 104 and/or 108. For
example, device 100 may be programmed to stop displaying
information on one of the screens, e.g., screen 108, since a user
can only view one screen at a time when device 100 is in the
"closed" position. In other embodiments, the signal generated by
position sensors 172A and 172B, indicating that the device 100 is
in the "closed" position, can trigger device 100 to answer an
incoming telephone call. The "closed" position can also be a
preferred position for utilizing the device 100 as a mobile
phone.
[0090] Device 100 can also be used in an "easel" position which is
illustrated in FIG. 1I. In the "easel" position, screens 104 and
108 are angled with respect to each other and facing outward with
the edges of screens 104 and 108 substantially horizontal. In this
position, device 100 can be configured to display information on
both screens 104 and 108 to allow two users to simultaneously
interact with device 100. When device 100 is in the "easel"
position, sensors 172A and 172B generate a signal indicating that
the screens 104 and 108 are positioned at an angle to each other,
and the accelerometer 176 can generate a signal indicating that
device 100 has been placed so that the edge of screens 104 and 108
are substantially horizontal. The signals can then be used in
combination to generate events that trigger changes in the display
of information on screens 104 and 108.
[0091] FIG. 1J illustrates device 100 in a "modified easel"
position. In the "modified easel" position, one of screens 104 or
108 is used as a stand and is faced down on the surface of an
object such as a table. This position provides a convenient way for
information to be displayed to a user in landscape orientation.
Similar to the easel position, when device 100 is in the "modified
easel" position, position sensors 172A and 172B generate a signal
indicating that the screens 104 and 108 are positioned at an angle
to each other. The accelerometer 176 would generate a signal
indicating that device 100 has been positioned so that one of
screens 104 and 108 is faced downwardly and is substantially
horizontal. The signals can then be used to generate events that
trigger changes in the display of information of screens 104 and
108. For example, information may not be displayed on the screen
that is face down since a user cannot see the screen.
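
Combining the two sensor signals just described, easel detection might be sketched as below. The boolean inputs stand in for the position sensor and accelerometer signals, and the decision logic is an assumption rather than the device's actual algorithm.

// Sketch of distinguishing the "easel" and "modified easel" positions
// from the position sensors (screens at an angle) and the accelerometer
// (edges horizontal, or one screen face down).
public class EaselDetector {
    static String detect(boolean screensAtAngle,
                         boolean edgesHorizontal,
                         boolean oneScreenFaceDown) {
        if (!screensAtAngle) return "not an easel position";
        if (oneScreenFaceDown) return "modified easel";
        return edgesHorizontal ? "easel" : "indeterminate";
    }

    public static void main(String[] args) {
        System.out.println(detect(true, true, false)); // easel
        System.out.println(detect(true, false, true)); // modified easel
    }
}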
[0092] Transitional states are also possible. When the position
sensors 172A and 172B and/or the accelerometer 176 indicate that the
screens are being closed or folded (from open), a closing
transitional state is recognized. Conversely, when the position
sensors 172A and 172B indicate that the screens are being opened or
folded (from closed), an opening transitional state is recognized. The closing
and opening transitional states are typically time-based, or have a
maximum time duration from a sensed starting point. Normally, no
user input is possible when one of the closing and opening states
is in effect. In this manner, incidental user contact with a screen
during the closing or opening function is not misinterpreted as
user input. In embodiments, another transitional state is possible
when the device 100 is closed. This additional transitional state
allows the display to switch from one screen 104 to the second
screen 108 when the device 100 is closed based on some user input,
e.g., a double tap on the screen 110,114.
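
The time-based suppression of input during a transition can be sketched as a small guard object. The 500 ms timeout is an invented value, since the text specifies only that the transitional states are time-based with a maximum duration.

// While a closing or opening transitional state is in effect, touch
// input is ignored so incidental contact is not misread as user input.
public class TransitionGuard {
    static final long MAX_TRANSITION_MS = 500; // assumed maximum duration

    private long transitionStartMs = -MAX_TRANSITION_MS - 1; // none yet

    void onHingeMovementDetected(long nowMs) {
        transitionStartMs = nowMs; // closing/opening transition begins
    }

    boolean acceptTouchInput(long nowMs) {
        // Block input until the transitional state times out.
        return nowMs - transitionStartMs > MAX_TRANSITION_MS;
    }

    public static void main(String[] args) {
        TransitionGuard guard = new TransitionGuard();
        guard.onHingeMovementDetected(0);
        System.out.println(guard.acceptTouchInput(100));   // false: mid-transition
        System.out.println(guard.acceptTouchInput(1_000)); // true: transition over
    }
}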
[0093] As can be appreciated, the description of device 100 is made
for illustrative purposes only, and the embodiments are not limited
to the specific mechanical features shown in FIGS. 1A-1J and
described above. In other embodiments, device 100 may include
additional features, including one or more additional buttons,
slots, display areas, hinges, and/or locking mechanisms.
Additionally, in embodiments, the features described above may be
located in different parts of device 100 and still provide similar
functionality. Therefore, FIGS. 1A-1J and the description provided
above are nonlimiting.
[0094] Hardware Features:
[0095] FIG. 2 illustrates components of a device 100 in accordance
with embodiments of the present disclosure. In general, the device
100 includes a primary screen 104 and a secondary screen 108. While
the primary screen 104 and its components are normally enabled in
both the opened and closed positions or states, the secondary
screen 108 and its components are normally enabled in the opened
state but disabled in the closed state. However, even when in the
closed state a user or application triggered interrupt (such as in
response to a phone application or camera application operation)
can flip the active screen, or disable the primary screen 104 and
enable the secondary screen 108, by a suitable command. Each screen
104, 108 can be touch sensitive and can include different operative
areas. For example, a first operative area, within each touch
sensitive screen 104 and 108, may comprise a touch sensitive
display 110, 114. In general, the touch sensitive display 110, 114
may comprise a full color, touch sensitive display. A second area
within each touch sensitive screen 104 and 108 may comprise a
gesture capture region 120, 124. The gesture capture region 120,
124 may comprise an area or region that is outside of the touch
sensitive display 110, 114 area, and that is capable of receiving
input, for example in the form of gestures provided by a user.
However, the gesture capture region 120, 124 does not include
pixels that can perform a display function or capability.
[0096] A third region of the touch sensitive screens 104 and 108
may comprise a configurable area 112, 116. The configurable area
112, 116 is capable of receiving input and has display or limited
display capabilities. In embodiments, the configurable area 112,
116 may present different input options to the user. For example,
the configurable area 112, 116 may display buttons or other
relatable items. Moreover, the identity of displayed buttons, or
whether any buttons are displayed at all within the configurable
area 112, 116 of a touch sensitive screen 104 or 108, may be
determined from the context in which the device 100 is used and/or
operated. In an exemplary embodiment, the touch sensitive screens
104 and 108 comprise liquid crystal display devices extending
across at least those regions of the touch sensitive screens 104
and 108 that are capable of providing visual output to a user, and
a capacitive input matrix over those regions of the touch sensitive
screens 104 and 108 that are capable of receiving input from the
user.
[0097] One or more display controllers 216a, 216b may be provided
for controlling the operation of the touch sensitive screens 104
and 108, including input (touch sensing) and output (display)
functions. In the exemplary embodiment illustrated in FIG. 2, a
separate touch screen controller 216a or 216b is provided for each
touch screen 104 and 108. In accordance with alternate embodiments,
a common or shared touch screen controller 216 may be used to
control each of the included touch sensitive screens 104 and 108.
In accordance with still other embodiments, the functions of a
touch screen controller 216 may be incorporated into other
components, such as a processor 204.
[0098] The processor 204 may comprise a general purpose
programmable processor or controller for executing application
programming or instructions. In accordance with at least some
embodiments, the processor 204 may include multiple processor
cores, and/or implement multiple virtual processors. In accordance
with still other embodiments, the processor 204 may include
multiple physical processors. As a particular example, the
processor 204 may comprise a specially configured application
specific integrated circuit (ASIC) or other integrated circuit, a
digital signal processor, a controller, a hardwired electronic or
logic circuit, a programmable logic device or gate array, a special
purpose computer, or the like. The processor 204 generally
functions to run programming code or instructions implementing
various functions of the device 100.
[0099] A communication device 100 may also include memory 208 for
use in connection with the execution of application programming or
instructions by the processor 204, and for the temporary or long
term storage of program instructions and/or data. As examples, the
memory 208 may comprise RAM, DRAM, SDRAM, or other solid state
memory. Alternatively or in addition, data storage 212 may be
provided. Like the memory 208, the data storage 212 may comprise a
solid state memory device or devices. Alternatively or in addition,
the data storage 212 may comprise a hard disk drive or other random
access memory.
[0100] In support of communications functions or capabilities, the
device 100 can include a cellular telephony module 228. As
examples, the cellular telephony module 228 can comprise a GSM,
CDMA, FDMA and/or analog cellular telephony transceiver capable of
supporting voice, multimedia and/or data transfers over a cellular
network. Alternatively or in addition, the device 100 can include
an additional or other wireless communications module 232. As
examples, the other wireless communications module 232 can comprise
a Wi-Fi, BLUETOOTH.TM., WiMax, infrared, or other wireless
communications link. The cellular telephony module 228 and the
other wireless communications module 232 can each be associated
with a shared or a dedicated antenna 224.
[0101] A port interface 252 may be included. The port interface 252
may include proprietary or universal ports to support the
interconnection of the device 100 to other devices or components,
such as a dock, which may or may not include additional or
different capabilities from those integral to the device 100. In
addition to supporting an exchange of communication signals between
the device 100 and another device or component, the docking port
136 and/or port interface 252 can support the supply of power to or
from the device 100. The port interface 252 also comprises an
intelligent element that comprises a docking module for controlling
communications or other interactions between the device 100 and a
connected device or component.
[0102] An input/output module 248 and associated ports may be
included to support communications over wired networks or links,
for example with other communication devices, server devices,
and/or peripheral devices. Examples of an input/output module 248
include an Ethernet port, a Universal Serial Bus (USB) port,
Institute of Electrical and Electronics Engineers (IEEE) 1394, or
other interface.
[0103] An audio input/output interface/device(s) 244 can be
included to provide analog audio to an interconnected speaker or
other device, and to receive analog audio input from a connected
microphone or other device. As an example, the audio input/output
interface/device(s) 244 may comprise an associated amplifier and
analog to digital converter. Alternatively or in addition, the
device 100 can include an integrated audio input/output device 256
and/or an audio jack for interconnecting an external speaker or
microphone. For example, an integrated speaker and an integrated
microphone can be provided, to support near talk or speaker phone
operations.
[0104] Hardware buttons 158 can be included for example for use in
connection with certain control operations. Examples include a
master power switch, volume control, etc., as described in
conjunction with FIGS. 1A through 1J. One or more image capture
interfaces/devices 240, such as a camera, can be included for
capturing still and/or video images. Alternatively or in addition,
an image capture interface/device 240 can include a scanner or code
reader. An image capture interface/device 240 can include or be
associated with additional elements, such as a flash or other light
source.
[0105] The device 100 can also include a global positioning system
(GPS) receiver 236. In accordance with embodiments of the present
invention, the GPS receiver 236 may further comprise a GPS module
that is capable of providing absolute location information to other
components of the device 100. An accelerometer(s) 176 may also be
included. For example, in connection with the display of
information to a user and/or other functions, a signal from the
accelerometer 176 can be used to determine an orientation and/or
format in which to display that information to the user.
[0106] Embodiments of the present invention can also include one or
more position sensor(s) 172. The position sensor 172 can provide a
signal indicating the position of the touch sensitive screens 104
and 108 relative to one another. This information can be provided
as an input, for example to a user interface application, to
determine an operating mode, characteristics of the touch sensitive
displays 110, 114, and/or other device 100 operations. As examples,
a screen position sensor 172 can comprise a series of Hall effect
sensors, a multiple position switch, an optical switch, a
Wheatstone bridge, a potentiometer, or other arrangement capable of
providing a signal indicating which of multiple relative positions
the touch screens are in.
[0107] Communications between various components of the device 100
can be carried by one or more buses 222. In addition, power can be
supplied to the components of the device 100 from a power source
and/or power control module 260. The power control module 260 can,
for example, include a battery, an AC to DC converter, power
control logic, and/or ports for interconnecting the device 100 to
an external source of power.
[0108] Device State:
[0109] FIGS. 3A and 3B represent illustrative states of device 100.
While a number of illustrative states are shown, and transitions
from a first state to a second state, it is to be appreciated that
the illustrative state diagram may not encompass all possible
states and/or all possible transitions from a first state to a
second state. As illustrated in FIG. 3A, the various arrows between
the states (illustrated by the state represented in the circle)
represent a physical change that occurs to the device 100, that is
detected by one or more of hardware and software, the detection
triggering one or more of a hardware and/or software interrupt that
is used to control and/or manage one or more functions of device
100.
[0110] As illustrated in FIG. 3A, there are twelve exemplary
"physical" states: closed 304, transition 308 (or opening
transitional state), easel 312, modified easel 316, open 320,
inbound/outbound call or communication 324, image/video capture
328, transition 332 (or closing transitional state), landscape 340,
docked 336, docked 344 and landscape 348. Next to each illustrative
state is a representation of the physical state of the device 100
with the exception of states 324 and 328, where the state is
generally symbolized by the international icon for a telephone and
the icon for a camera, respectfully.
[0111] In state 304, the device is in a closed state with the
device 100 generally oriented in the portrait direction with the
primary screen 104 and the secondary screen 108 back-to-back in
different planes (see FIG. 1H). From the closed state, the device
100 can enter, for example, docked state 336, where the device 100
is coupled with a docking station, docking cable, or in general
docked or associated with one or more other devices or peripherals,
or the landscape state 340, where the device 100 is generally
oriented with the primary screen 104 facing the user, and the
primary screen 104 and the secondary screen 108 being
back-to-back.
[0112] In the closed state, the device can also move to a
transitional state where the device remains closed but the display
is moved from one screen 104 to another screen 108 based on a user
input, e.g., a double tap on the screen 110, 114. Still another
embodiment includes a bilateral state. In the bilateral state, the
device remains closed, but a single application displays at least
one window on both the first display 110 and the second display
114. The windows shown on the first and second display 110, 114 may
be the same or different based on the application and the state of
that application. For example, while acquiring an image with a
camera, the device may display the viewfinder on the first display
110 and display a preview for the photo subjects (full screen and
mirrored left-to-right) on the second display 114.
[0113] In state 308, a transition state from the closed state 304
to the semi-open state or easel state 312, the device 100 is shown
opening with the primary screen 104 and the secondary screen 108
being rotated about an axis that is coincident with the hinge.
Upon entering the easel state 312, the primary screen 104 and the
secondary screen 108 are separated from one another such that, for
example, the device 100 can sit in an easel-like configuration on a
surface.
[0114] In state 316, known as the modified easel position, the
device 100 has the primary screen 104 and the secondary screen 108
in a similar relative relationship to one another as in the easel
state 312, with the difference being one of the primary screen 104
or the secondary screen 108 is placed on a surface as shown.
[0115] State 320 is the open state where the primary screen 104 and
the secondary screen 108 are generally on the same plane. From the
open state, the device 100 can transition to the docked state 344
or the open landscape state 348. In the open state 320, the primary
screen 104 and the secondary screen 108 are generally in the
portrait-like orientation, while in landscape state 348 the primary
screen 104 and the secondary screen 108 are generally in a
landscape-like orientation.
[0116] State 324 is illustrative of a communication state, such as
when an inbound or outbound call is being received or placed,
respectively, by the device 100. While not illustrated for clarity,
it should be appreciated that the device 100 can transition to the
inbound/outbound call state 324 from any state illustrated in FIG.
3. In a similar manner, the image/video capture state 328 can be
entered into from any other state in FIG. 3, with the image/video
capture state 328 allowing the device 100 to take one or more
images via a camera and/or videos with a video capture device
240.
[0117] Transition state 332 illustratively shows primary screen 104
and the secondary screen 108 being closed upon one another for
entry into, for example, the closed state 304.
[0118] FIG. 3B illustrates, with reference to the key, the inputs
that are received to detect a transition from a first state to a
second state. In FIG. 3B, various combinations of states are shown
with, in general, a portion of the columns being directed toward a
portrait state 352 and a landscape state 356, and a portion of the
rows being directed toward a portrait state 360 and a landscape
state 364.
[0119] In FIG. 3B, the Key indicates that "H" represents an input
from one or more Hall Effect sensors, "A" represents an input from
one or more accelerometers, "T" represents an input from a timer,
"P" represents a communications trigger input and "I" represents an
image and/or video capture request input. Thus, in the center
portion 376 of the chart, an input, or combination of inputs, is
shown that represents how the device 100 detects a transition from
a first physical state to a second physical state.
[0120] As discussed, in the center portion of the chart 376, the
inputs that are received enable the detection of a transition from,
for example, a portrait open state to a landscape easel
state--shown in bold--"HAT." For this exemplary transition from the
portrait open to the landscape easel state, a Hall Effect sensor
("H"), an accelerometer ("A") and a timer ("T") input may be
needed. The timer input can be derived from, for example, a clock
associated with the processor.
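For purposes of illustration only, the mapping from received inputs
to a detected transition might be sketched as follows in Java; the
class, enum, and method names here are hypothetical and do not
appear in the application:

    import java.util.EnumSet;
    import java.util.Set;

    // Minimal sketch: detect a physical-state transition from the
    // combination of inputs listed in the FIG. 3B key.
    public class TransitionDetector {

        enum Input { HALL_EFFECT, ACCELEROMETER, TIMER, COMM_TRIGGER, CAPTURE_REQUEST }

        enum PhysicalState { PORTRAIT_OPEN, LANDSCAPE_EASEL, DOCKED }

        // The exemplary portrait-open to landscape-easel transition
        // requires Hall effect ("H"), accelerometer ("A"), and timer
        // ("T") inputs.
        private static final Set<Input> HAT =
                EnumSet.of(Input.HALL_EFFECT, Input.ACCELEROMETER, Input.TIMER);

        public PhysicalState detect(PhysicalState current, Set<Input> received) {
            if (current == PhysicalState.PORTRAIT_OPEN && received.containsAll(HAT)) {
                return PhysicalState.LANDSCAPE_EASEL;
            }
            return current; // no recognized transition
        }

        public static void main(String[] args) {
            Set<Input> received = EnumSet.of(Input.HALL_EFFECT,
                    Input.ACCELEROMETER, Input.TIMER);
            System.out.println(new TransitionDetector()
                    .detect(PhysicalState.PORTRAIT_OPEN, received)); // LANDSCAPE_EASEL
        }
    }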
[0121] In addition to the portrait and landscape states, a docked
state 368 is also shown that is triggered based on the receipt of a
docking signal 372. As discussed above and in relation to FIG. 3,
the docking signal can be triggered by the association of the
device 100 with one or more other device 100s, accessories,
peripherals, smart docks, or the like.
[0122] User Interaction:
[0123] FIGS. 4A through 4H depict various graphical representations
of gesture inputs that may be recognized by the screens 104, 108.
The gestures may be performed not only by a user's body part, such
as a digit, but also by other devices, such as a stylus, that may
be sensed by the contact sensing portion(s) of a screen 104, 108.
In general, gestures are interpreted differently, based on where
the gestures are performed (either directly on the display 110, 114
or in the gesture capture region 120, 124). For example, gestures
in the display 110, 114 may be directed to a desktop or
application, and gestures in the gesture capture region 120, 124
may be interpreted as being intended for the system.
[0124] With reference to FIGS. 4A-4H, a first type of gesture, a
touch gesture 420, is substantially stationary on the screen
104,108 for a selected length of time. A circle 428 represents a
touch or other contact type received at a particular location of a
contact sensing portion of the screen. The circle 428 may include a
border 432, the thickness of which indicates a length of time that
the contact is held substantially stationary at the contact
location. For instance, a tap 420 (or short press) has a thinner
border 432a than the border 432b for a long press 424 (or for a
normal press). The long press 424 may involve a contact that
remains substantially stationary on the screen for a longer time
period than that of a tap 420. As will be appreciated, differently
defined gestures may be registered depending upon the length of
time that the touch remains stationary prior to contact cessation
or movement on the screen.
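By way of a hypothetical sketch (the threshold value below is
assumed for illustration and is not taken from the application),
the tap/long-press distinction reduces to comparing the hold
duration against a threshold:

    // Minimal sketch: classify a substantially stationary contact as
    // a tap or a long press from how long it is held before release.
    public class PressClassifier {

        enum Gesture { TAP, LONG_PRESS }

        private static final long LONG_PRESS_THRESHOLD_MS = 500; // assumed value

        public Gesture classify(long downTimeMs, long upTimeMs) {
            long heldMs = upTimeMs - downTimeMs;
            return heldMs >= LONG_PRESS_THRESHOLD_MS ? Gesture.LONG_PRESS : Gesture.TAP;
        }

        public static void main(String[] args) {
            PressClassifier c = new PressClassifier();
            System.out.println(c.classify(0, 120)); // TAP
            System.out.println(c.classify(0, 800)); // LONG_PRESS
        }
    }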
[0125] With reference to FIG. 4C, a drag gesture 400 on the screen
104,108 is an initial contact (represented by circle 428) with
contact movement 436 in a selected direction. The initial contact
428 may remain stationary on the screen 104,108 for a certain
amount of time represented by the border 432. The drag gesture
typically requires the user to contact an icon, window, or other
displayed image at a first location followed by movement of the
contact in a drag direction to a new second location desired for
the selected displayed image. The contact movement need not be in a
straight line but may follow any path of movement so long as the
contact is substantially continuous from the first to the second
location.
[0126] With reference to FIG. 4D, a flick gesture 404 on the screen
104,108 is an initial contact (represented by circle 428) with
truncated contact movement 436 (relative to a drag gesture) in a
selected direction. In embodiments, a flick has a higher exit
velocity for the last movement in the gesture compared to the drag
gesture. The flick gesture can, for instance, be a finger snap
following initial contact. Compared to a drag gesture, a flick
gesture generally does not require continual contact with the
screen 104,108 from the first location of a displayed image to a
predetermined second location. The contacted displayed image is
moved by the flick gesture in the direction of the flick gesture to
the predetermined second location. Although both gestures commonly
can move a displayed image from a first location to a second
location, the temporal duration and distance of travel of the
contact on the screen is generally less for a flick than for a drag
gesture.
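A hypothetical sketch of the drag/flick distinction follows; the
threshold constants are assumed for illustration and are not part
of the application:

    // Minimal sketch: a flick ends with a high exit velocity over a
    // short, truncated movement, while a drag is a longer, continuous
    // movement toward a user-selected location.
    public class MoveClassifier {

        enum Gesture { DRAG, FLICK }

        private static final double FLICK_EXIT_VELOCITY_PX_PER_MS = 1.5; // assumed
        private static final long MAX_FLICK_DURATION_MS = 300;           // assumed

        public Gesture classify(long durationMs, double exitVelocityPxPerMs) {
            if (exitVelocityPxPerMs >= FLICK_EXIT_VELOCITY_PX_PER_MS
                    && durationMs <= MAX_FLICK_DURATION_MS) {
                return Gesture.FLICK;
            }
            return Gesture.DRAG;
        }

        public static void main(String[] args) {
            MoveClassifier c = new MoveClassifier();
            System.out.println(c.classify(150, 2.0)); // FLICK
            System.out.println(c.classify(600, 0.4)); // DRAG
        }
    }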
[0127] With reference to FIG. 4E, a pinch gesture 408 on the screen
104,108 is depicted. The pinch gesture 408 may be initiated by a
first contact 428a to the screen 104,108 by, for example, a first
digit and a second contact 428b to the screen 104,108 by, for
example, a second digit. The first and second contacts 428a,b may
be detected by a common contact sensing portion of a common screen
104,108, by different contact sensing portions of a common screen
104 or 108, or by different contact sensing portions of different
screens. The first contact 428a is held for a first amount of time,
as represented by the border 432a, and the second contact 428b is
held for a second amount of time, as represented by the border
432b. The first and second amounts of time are generally
substantially the same, and the first and second contacts 428a,b
generally occur substantially simultaneously. The first and second
contacts 428a,b generally also include corresponding first and
second contact movements 436a,b, respectively. The first and
second contact movements 436a,b are generally in opposing
directions. Stated another way, the first contact movement 436a is
towards the second contact 428b, and the second contact movement
436b is towards the first contact 428a. More simply stated, the
pinch gesture 408 may be accomplished by a user's digits touching
the screen 104,108 in a pinching motion.
[0128] With reference to FIG. 4F, a spread gesture 410 on the
screen 104,108 is depicted. The spread gesture 410 may be initiated
by a first contact 428a to the screen 104,108 by, for example, a
first digit and a second contact 428b to the screen 104,108 by, for
example, a second digit. The first and second contacts 428a,b may
be detected by a common contact sensing portion of a common screen
104,108, by different contact sensing portions of a common screen
104,108, or by different contact sensing portions of different
screens. The first contact 428a is held for a first amount of time,
as represented by the border 432a, and the second contact 428b is
held for a second amount of time, as represented by the border
432b. The first and second amounts of time are generally
substantially the same, and the first and second contacts 428a,b
generally occur substantially simultaneously. The first and second
contacts 428a,b generally also include corresponding first and
second contact movements 436a,b, respectively. The first and
second contact movements 436a,b are generally in a common
direction. Stated another way, the first and second contact
movements 436a,b are away from the first and second contacts
428a,b. More simply stated, the spread gesture 410 may be
accomplished by a user's digits touching the screen 104,108 in a
spreading motion.
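The pinch/spread distinction reduces to whether the distance
between the two contacts shrinks or grows; the following is a
hypothetical sketch, with the class name and method signature
assumed for illustration:

    // Minimal sketch: classify a two-contact gesture as a pinch
    // (contacts converge) or a spread (contacts diverge) from the
    // start and end points of the two contact movements.
    public class TwoFingerClassifier {

        enum Gesture { PINCH, SPREAD, NONE }

        public Gesture classify(double sx1, double sy1, double ex1, double ey1,
                                double sx2, double sy2, double ex2, double ey2) {
            double startDist = Math.hypot(sx2 - sx1, sy2 - sy1);
            double endDist = Math.hypot(ex2 - ex1, ey2 - ey1);
            if (endDist < startDist) return Gesture.PINCH;
            if (endDist > startDist) return Gesture.SPREAD;
            return Gesture.NONE;
        }

        public static void main(String[] args) {
            TwoFingerClassifier c = new TwoFingerClassifier();
            // Contacts move toward each other: PINCH.
            System.out.println(c.classify(0, 0, 40, 0, 100, 0, 60, 0));
            // Contacts move apart: SPREAD.
            System.out.println(c.classify(40, 0, 0, 0, 60, 0, 100, 0));
        }
    }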
[0129] The above gestures may be combined in any manner, such as
those shown by FIGS. 4G and 4H, to produce a determined functional
result. For example, in FIG. 4G a tap gesture 420 is combined with
a drag or flick gesture 412 in a direction away from the tap
gesture 420. In FIG. 4H, a tap gesture 420 is combined with a drag
or flick gesture 412 in a direction towards the tap gesture
420.
[0130] The functional result of receiving a gesture can vary
depending on a number of factors, including a state of the device
100, display 110, 114, or screen 104, 108, a context associated
with the gesture, or sensed location of the gesture. The state of
the device commonly refers to one or more of a configuration of the
device 100, a display orientation, and user and other inputs
received by the device 100. Context commonly refers to one or more
of the particular application(s) selected by the gesture and the
portion(s) of the application currently executing, whether the
application is a single- or multi-screen application, and whether
the application is a multi-screen application displaying one or
more windows in one or more screens or in one or more stacks.
Sensed location of the gesture commonly refers to whether the
sensed set(s) of gesture location coordinates are on a touch
sensitive display 110, 114 or a gesture capture region 120, 124,
whether the sensed set(s) of gesture location coordinates are
associated with a common or different display or screen 104,108,
and/or what portion of the gesture capture region contains the
sensed set(s) of gesture location coordinates.
[0131] A tap, when received by a touch sensitive display 110,
114, can be used, for instance, to select an icon to initiate or
terminate execution of a corresponding application, to maximize or
minimize a window, to reorder windows in a stack, and to provide
user input such as by keyboard display or other displayed image. A
drag, when received by a touch sensitive display 110, 114, can be
used, for instance, to relocate an icon or window to a desired
location within a display, to reorder a stack on a display, or to
span both displays (such that the selected window occupies a
portion of each display simultaneously). A flick, when received by
a touch sensitive display 110, 114 or a gesture capture region 120,
124, can be used to relocate a window from a first display to a
second display or to span both displays (such that the selected
window occupies a portion of each display simultaneously). Unlike
the drag gesture, however, the flick gesture is generally not used
to move the displayed image to a specific user-selected location
but to a default location that is not configurable by the user.
[0132] The pinch gesture, when received by a touch sensitive
display 110, 114 or a gesture capture region 120, 124, can be used
to maximize or otherwise increase the displayed area or size of a
window (typically when received entirely by a common display), to
switch windows displayed at the top of the stack on each display to
the top of the stack of the other display (typically when received
by different displays or screens), or to display an application
manager (a "pop-up window" that displays the windows in the stack).
The spread gesture, when received by a touch sensitive display 110,
114 or a gesture capture region 120, 124, can be used to minimize
or otherwise decrease the displayed area or size of a window, to
switch windows displayed at the top of the stack on each display to
the top of the stack of the other display (typically when received
by different displays or screens), or to display an application
manager (typically when received by an off-screen gesture capture
region on the same or different screens).
[0133] The combined gestures of FIG. 4G, when received by a common
display capture region in a common display or screen 104,108, can
be used to hold a first window stack location in a first stack
constant for a display receiving the gesture while reordering a
second window stack location in a second window stack to include a
window in the display receiving the gesture. The combined gestures
of FIG. 4H, when received by different display capture regions in a
common display or screen 104,108 or in different displays or
screens, can be used to hold a first window stack location in a
first window stack constant for a display receiving the tap part of
the gesture while reordering a second window stack location in a
second window stack to include a window in the display receiving
the flick or drag gesture. Although specific gestures and gesture
capture regions in the preceding examples have been associated with
corresponding sets of functional results, it is to be appreciated
that these associations can be redefined in any manner to produce
differing associations between gestures and/or gesture capture
regions and/or functional results.
[0134] Firmware and Software:
[0135] The memory 508 may store and the processor 504 may execute
one or more software components. These components can include at
least one operating system (OS) 516a and/or 516b, a framework 520,
and/or one or more applications 564a and/or 564b from an
application store 560. The processor 504 may receive inputs from
drivers 512, previously described in conjunction with FIG. 2. The
OS 516 can be any software, consisting of programs and data, that
manages computer hardware resources and provides common services
for the execution of various applications 564. The OS 516 can be
any operating system and, at least in some embodiments, dedicated
to mobile devices, including, but not limited to, Linux,
ANDROID.TM., iPhone OS (IOS.TM.), WINDOWS PHONE 7.TM., etc. The OS
516 is operable to provide functionality to the phone by executing
one or more operations, as described herein.
[0136] The applications 564 can be any higher level software that
executes particular functionality for the user. Applications 564
can include programs such as email clients, web browsers, texting
applications, games, media players, office suites, etc. The
applications 564 can be stored in an application store 560, which
may represent any memory or data storage, and the management
software associated therewith, for storing the applications 564.
Once executed, the applications 564 may be run in a different area
of memory 508.
[0137] The framework 520 may be any software or data that allows
the multiple tasks running on the device to interact. In
embodiments, at least portions of the framework 520 and the
discrete components described hereinafter may be considered part of
the OS 516 or an application 564. For clarity, however, these
portions are described as part of the framework 520, though the
components are not so limited. The framework 520 can include, but
is not limited
to, a Multi-Display Management (MDM) module 524, a Surface Cache
module 528, a Window Management module 532, an Input Management
module 536, a Task Management module 540, a Display Controller, one
or more frame buffers 548, a task stack 552, one or more window
stacks 550 (each of which is a logical arrangement of windows
and/or desktops in a display area), and/or an event buffer 556.
[0138] The MDM module 524 includes one or more modules that are
operable to manage the display of applications or other data on the
screens of the device. An embodiment of the MDM module 524 is
described in conjunction with FIG. 5B. In embodiments, the MDM
module 524 receives inputs from the OS 516, the drivers 512 and the
applications 564. The inputs assist the MDM module 524 in
determining how to configure and allocate the displays according to
the application's preferences and requirements, and the user's
actions. Once a display configuration is determined, the MDM
module 524 can bind the applications 564 to a
display configuration. The configuration may then be provided to
one or more other components to generate the display.
[0139] The Surface Cache module 528 includes any memory or storage
and the software associated therewith to store or cache one or more
images from the display screens. Each display screen may have
associated with it a series of active and non-active windows (or
other display objects, such as a desktop display). The
active window (or other display object) is currently being
displayed. The non-active windows (or other display objects) were
opened and/or at some time displayed but are now "behind" the
active window (or other display object). To enhance the user
experience, before being covered by another active window (or other
display object), a "screen shot" of a last generated image of the
window (or other display object) can be stored. The Surface Cache
module 528 may be operable to store the last active image of a
window (or other display object) not currently displayed. Thus, the
Surface Cache module 528 stores the images of non-active windows
(or other display objects) in a data store (not shown).
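A hypothetical sketch of the Surface Cache behavior just described
follows; the type and method names are assumed, and a byte array
stands in for whatever image representation the device uses:

    import java.util.HashMap;
    import java.util.Map;

    // Minimal sketch: store the last generated image of a window
    // before it becomes non-active so it can be redisplayed without
    // re-rendering.
    public class SurfaceCache {

        private final Map<Integer, byte[]> lastImages = new HashMap<>();

        // Called just before a window is covered by another display object.
        public void store(int windowId, byte[] screenshot) {
            lastImages.put(windowId, screenshot);
        }

        // Called when a non-active window must be shown again.
        public byte[] retrieve(int windowId) {
            return lastImages.get(windowId);
        }

        // Called when a window is closed and its image is no longer needed.
        public void evict(int windowId) {
            lastImages.remove(windowId);
        }
    }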
[0140] In embodiments, the Window Management module 532 is operable
to manage the windows (or other display objects) that are active or
not active on each of the screens. The Window Management module
532, based on information from the MDM module 524, the OS 516, or
other components, determines when a window (or other display
object) is active or not active. The Window Management module 532
may then put a non-visible window (or other display object) in a
"not active state" and, in conjunction with the Task Management
module Task Management 540 suspend the application's operation.
Further, the Window Management module 532 may assign a screen
identifier to the window (or other display object) or manage one or
more other items of data associated with the window (or other
display object). The Window Management module 532 may also provide
the stored information to the application 564, the Task Management
module 540, or other components interacting with or associated with
the window (or other display object).
[0141] The Input Management module 536 is operable to manage events
that occur with the device. An event is any input into the window
environment, for example, a user interface interaction with a
user. The Input Management module 536 receives the events and
logically stores the events in an event buffer 556. Events can
include such user interface interactions as a "down event," which
occurs when a screen 104, 108 receives a touch signal from a user,
a "move event," which occurs when the screen 104, 108 determines
that a user's finger is moving across a screen(s), an "up event,"
which occurs when the screen 104, 108 determines that the user has
stopped touching the screen 104, 108, etc. These events are
received, stored, and forwarded to other modules by the Input
Management module 536.
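A hypothetical sketch of this event buffering follows; the record
type and method names are assumed for illustration:

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Minimal sketch: receive user interface events, logically store
    // them in an event buffer, and forward them in arrival order.
    public class InputEventBuffer {

        enum EventType { DOWN, MOVE, UP }

        record Event(EventType type, int screenId, float x, float y) {}

        private final Queue<Event> buffer = new ArrayDeque<>();

        public void receive(Event e) {
            buffer.add(e); // logically store the event
        }

        public Event forwardNext() {
            return buffer.poll(); // returns null when the buffer is empty
        }

        public static void main(String[] args) {
            InputEventBuffer b = new InputEventBuffer();
            b.receive(new Event(EventType.DOWN, 104, 10f, 20f));
            b.receive(new Event(EventType.UP, 104, 10f, 20f));
            System.out.println(b.forwardNext()); // the DOWN event
        }
    }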
[0142] A task can be an application component that provides a
screen with which users can interact in order to do something, such
as dial the phone, take a photo, send an email, or view a map. Each
task may be given a window in which to draw a user interface. The
window typically fills the display 110,114, but may be smaller than
the display 110,114 and float on top of other windows. An
application usually consists of multiple tasks that are loosely
bound to each other. Typically, one task in an application
is specified as the "main" task, which is presented to the user
when launching the application for the first time. Each task can
then start another task to perform different actions.
[0143] The Task Management module 540 is operable to manage the
operation of the one or more applications 564 that may be executed
by the device. Thus, the Task Management module 540 can receive
signals to execute an application stored in the application store
560. The Task Management module 540 may then instantiate one or
more tasks or components of the application 564 to begin operation
of the application 564. Further, the Task Management module 540 may
suspend the application 564 based on user interface changes.
Suspending the application 564 may maintain application data in
memory but may limit or stop access to processor cycles for the
application 564. Once the application becomes active again, the
Task Management module 540 can again provide access to the
processor.
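The suspend/resume behavior described for the Task Management
module might be sketched as follows; this is a minimal illustration
with hypothetical names, not the module's actual implementation:

    // Minimal sketch: a suspended task keeps its data in memory but
    // is not scheduled for processor time until it is active again.
    public class TaskManager {

        enum TaskState { RUNNING, SUSPENDED }

        static class Task {
            TaskState state = TaskState.RUNNING;
            final StringBuilder retainedData = new StringBuilder(); // stays resident
        }

        public void suspend(Task t) {
            t.state = TaskState.SUSPENDED; // stop granting processor cycles
        }

        public void resume(Task t) {
            t.state = TaskState.RUNNING;   // restore access to the processor
        }
    }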
[0144] The Display Controller 544 is operable to render and output
the display(s) for the multi-screen device. In embodiments, the
Display Controller 544 creates and/or manages one or more frame
buffers 548. A frame buffer 548 can be a display output that drives
a display from a portion of memory containing a complete frame of
display data. In embodiments, the Display Controller 544 manages
one or more frame buffers. One frame buffer may be a composite
frame buffer that can represent the entire display space of both
screens. This composite frame buffer can appear as a single frame
to the OS 516. The Display Controller 544 can sub-divide this
composite frame buffer as required for use by each of the displays
110, 114. Thus, by using the Display Controller 544, the device 100
can have multiple screen displays without changing the underlying
software of the OS 516.
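A hypothetical sketch of a composite frame buffer that is
sub-divided per display follows; the dimensions and the
int-per-pixel representation are assumed for illustration:

    // Minimal sketch: one buffer spans both displays side by side and
    // is sub-divided into per-display regions for output.
    public class CompositeFrameBuffer {

        private final int width;   // combined width of both displays
        private final int height;
        private final int[] pixels;

        public CompositeFrameBuffer(int displayWidth, int height) {
            this.width = displayWidth * 2; // two side-by-side displays
            this.height = height;
            this.pixels = new int[width * height];
        }

        // Extract the sub-region that drives one display (0 = left, 1 = right).
        public int[] regionFor(int displayIndex, int displayWidth) {
            int[] region = new int[displayWidth * height];
            for (int row = 0; row < height; row++) {
                System.arraycopy(pixels, row * width + displayIndex * displayWidth,
                                 region, row * displayWidth, displayWidth);
            }
            return region;
        }
    }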
[0145] The Application Manager 562 can be a service that provides
the presentation layer for the window environment. Thus, the
Application Manager 562 provides the graphical model for rendering
by the Window Management module 532. Likewise, the Desktop 566
provides the presentation layer for the Application Store 560.
Thus, the desktop provides a graphical model of a surface having
selectable application icons for the Applications 564 in the
Application Store 560 that can be provided to the Window
Management module 532 for rendering.
[0146] An embodiment of the MDM module 524 is shown in FIG. 5B. The
MDM module 524 is operable to determine the state of the
environment for the device, including, but not limited to, the
orientation of the device, what applications 564 are executing, how
the applications 564 are to be displayed, what actions the user is
conducting, the tasks being displayed, etc. To configure the
display, the MDM module 524 interprets these environmental factors
and determines a display configuration, as described in conjunction
with FIGS. 6A-6J. Then, the MDM module 524 can bind the
applications 564 or other device components to the displays. The
configuration may then be sent to the Display Controller 544 and/or
the OS 516 to generate the display. The MDM module 524 can include
one or more of, but is not limited to, a Display Configuration
Module 568, a Preferences Module 572, a Device State Module 574, a
Gesture Module 576, a Requirements Module 580, an Event Module 584,
and/or a Binding Module 588.
[0147] The Display Configuration Module 568 determines the layout
for the display. In embodiments, the Display Configuration Module
568 can determine the environmental factors. The environmental
factors may be received from one or more other MDM module 524
modules or from other sources. The Display Configuration Module 568
can then determine from the list of factors the best configuration
for the display. Some embodiments of the possible configurations
and the factors associated therewith are described in conjunction
with FIGS. 6A-6F.
[0148] The Preferences Module 572 is operable to determine display
preferences for an application 564 or other component. For example,
an application can have a preference for Single or Dual displays.
The Preferences Module 572 can determine or receive the application
preferences and store the preferences. As the configuration of the
device changes, the preferences may be reviewed to determine if a
better display configuration can be achieved for the application
564.
[0149] The Device State Module 574 is operable to determine or
receive the state of the device. The state of the device can be as
described in conjunction with FIGS. 3A and 3B. The state of the
device can be used by the Display Configuration Module 568 to
determine the configuration for the display. As such, the Device
State Module 574 may receive inputs and interpret the state of the
device. The state information is then provided to the Display
Configuration Module 568.
[0150] The Gesture Module 576 is operable to determine if the user
is conducting any actions on the user interface. Thus, the Gesture
Module 576 can receive task information either from the task stack
552 or the Input Management module 536. These gestures may be as
defined in conjunction with FIGS. 4A through 4H. For example,
moving a window causes the display to render a series of display
frames that illustrate the window moving. The gesture associated
with such user interface interaction can be received and
interpreted by the Gesture Module 576. The information about the
user gesture is then sent to the Task Management Module 540 to
modify the display binding of the task.
[0151] The Requirements Module 580, similar to the Preferences
Module 572, is operable to determine display requirements for an
application 564 or other component. An application can have a set
display requirement that must be observed. Some applications
require a particular display orientation. For example, the
application "Angry Birds" can only be displayed in landscape
orientation. This type of display requirement can be determined or
received by the Requirements Module 580. As the orientation of the
device changes, the Requirements Module 580 can reassert the
display requirements for the application 564. The Display
Configuration Module 568 can generate a display configuration that
is in accordance with the application display requirements, as
provided by the Requirements Module 580.
[0152] The Event Module 584, similar to the Gesture Module 576, is
operable to determine one or more events occurring with an
application or other component that can affect the user interface.
Thus, the Event Module 584 can receive event information either
from the event buffer 556 or the Task Management module 540. These
events can change how the tasks are bound to the displays. For
example, an email application receiving an email can cause the
display to render the new message in a secondary screen. The events
associated with such application execution can be received and
interpreted by the Event Module 584. The information about the
events then may be sent to the Display Configuration Module 568 to
modify the configuration of the display.
[0153] The Binding Module 588 is operable to bind the applications
564 or the other components to the configuration determined by the
Display Configuration Module 568. A binding associates, in memory,
the display configuration for each application with the display and
mode of the application. Thus, the Binding Module 588 can associate
an application with a display configuration for the application
(e.g. landscape, portrait, multi-screen, etc.). Then, the Binding
Module 588 may assign a display identifier to the display. The
display identifier associates the application with a particular
screen of the device. This binding is then stored and provided to
the Display Controller 544, the OS 516, or other components to
properly render the display. The binding is dynamic and can change
or be updated based on configuration changes associated with
events, gestures, state changes, application preferences or
requirements, etc.
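A hypothetical sketch of such a binding follows, associating an
application with a display configuration and a display identifier;
the record and method names are assumed for illustration:

    import java.util.HashMap;
    import java.util.Map;

    // Minimal sketch: bind applications to display configurations and
    // display identifiers; bindings can be updated dynamically.
    public class BindingModule {

        enum DisplayConfig { PORTRAIT, LANDSCAPE, MULTI_SCREEN }

        record Binding(DisplayConfig config, int displayId) {}

        private final Map<String, Binding> bindings = new HashMap<>();

        // Re-binding on an event, gesture, or state change overwrites
        // the previous association.
        public void bind(String appName, DisplayConfig config, int displayId) {
            bindings.put(appName, new Binding(config, displayId));
        }

        public Binding lookup(String appName) {
            return bindings.get(appName);
        }

        public static void main(String[] args) {
            BindingModule m = new BindingModule();
            m.bind("browser", DisplayConfig.LANDSCAPE, 1);
            m.bind("browser", DisplayConfig.MULTI_SCREEN, 0); // dynamic update
            System.out.println(m.lookup("browser"));
        }
    }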
[0154] User Interface Configurations:
[0155] With reference now to FIGS. 6A-J, various types of output
configurations made possible by the device 100 will be described
hereinafter.
[0156] FIGS. 6A and 6B depict two different output configurations
of the device 100 being in a first state. Specifically, FIG. 6A
depicts the device 100 being in a closed portrait state 304 where
the data is displayed on the primary screen 104. In this example,
the device 100 displays data via the touch sensitive display 110 in
a first portrait configuration 604. As can be appreciated, the
first portrait configuration 604 may only display a desktop or
operating system home screen. Alternatively, one or more windows
may be presented in a portrait orientation while the device 100 is
displaying data in the first portrait configuration 604.
[0157] FIG. 6B depicts the device 100 still being in the closed
portrait state 304, but instead data is displayed on the secondary
screen 108. In this example, the device 100 displays data via the
touch sensitive display 114 in a second portrait configuration
608.
[0158] It may be possible to display similar or different data in
either the first or second portrait configuration 604, 608. It may
also be possible to transition between the first portrait
configuration 604 and second portrait configuration 608 by
providing the device 100 a user gesture (e.g., a double tap
gesture), a menu selection, or other means. Other suitable gestures
may also be employed to transition between configurations.
Furthermore, it may also be possible to transition the device 100
from the first or second portrait configuration 604, 608 to any
other configuration described herein depending upon which state the
device 100 is moved to.
[0159] An alternative output configuration may be accommodated by
the device 100 being in a second state. Specifically, FIG. 6C
depicts a third portrait configuration where data is displayed
simultaneously on both the primary screen 104 and the secondary
screen 108. The third portrait configuration may be referred to as
a Dual-Portrait (PD) output configuration. In the PD output
configuration, the touch sensitive display 110 of the primary
screen 104 depicts data in the first portrait configuration 604
while the touch sensitive display 114 of the secondary screen 108
depicts data in the second portrait configuration 608. The
simultaneous presentation of the first portrait configuration 604
and the second portrait configuration 608 may occur when the device
100 is in an open portrait state 320. In this configuration, the
device 100 may display one application window in one display 110 or
114, two application windows (one in each display 110 and 114), one
application window and one desktop, or one desktop. Other
configurations may be possible. It should be appreciated that it
may also be possible to transition the device 100 from the
simultaneous display of configurations 604, 608 to any other
configuration described herein depending upon which state the
device 100 is moved to. Furthermore, while in this state, an
application's display preference may place the device into
bilateral mode, in which both displays are active to display
different windows in the same application. For example, a Camera
application may display a viewfinder and controls on one side,
while the other side displays a mirrored preview that can be seen
by the photo subjects. Games involving simultaneous play by two
players may also take advantage of bilateral mode.
[0160] FIGS. 6D and 6E depict two further output configurations of
the device 100 being in a third state. Specifically, FIG. 6D
depicts the device 100 being in a closed landscape state 340 where
the data is displayed on the primary screen 104. In this example,
the device 100 displays data via the touch sensitive display 110 in
a first landscape configuration 612. Much like the other
configurations described herein, the first landscape configuration
612 may display a desktop, a home screen, one or more windows
displaying application data, or the like.
[0161] FIG. 6E depicts the device 100 still being in the closed
landscape state 340, but instead data is displayed on the secondary
screen 108. In this example, the device 100 displays data via the
touch sensitive display 114 in a second landscape configuration
616. It may be possible to display similar or different data in
either the first or second landscape configuration 612, 616. It may
also be possible to transition between the first landscape
configuration 612 and second landscape configuration 616 by
providing the device 100 with one or both of a twist and tap
gesture or a flip and slide gesture. Other suitable gestures may
also be employed to transition between configurations. Furthermore,
it may also be possible to transition the device 100 from the first
or second landscape configuration 612, 616 to any other
configuration described herein depending upon which state the
device 100 is moved to.
[0162] FIG. 6F depicts a third landscape configuration where data
is displayed simultaneously on both the primary screen 104 and the
secondary screen 108. The third landscape configuration may be
referred to as a Dual-Landscape (LD) output configuration. In the
LD output configuration, the touch sensitive display 110 of the
primary screen 104 depicts data in the first landscape
configuration 612 while the touch sensitive display 114 of the
secondary screen 108 depicts data in the second landscape
configuration 616. The simultaneous presentation of the first
landscape configuration 612 and the second landscape configuration
616 may occur when the device 100 is in an open landscape state
348. It should be appreciated that it may also be possible to
transition the device 100 from the simultaneous display of
configurations 612, 616 to any other configuration described herein
depending upon which state the device 100 is moved to.
[0163] FIGS. 6G and 6H depict two views of a device 100 being in
yet another state. Specifically, the device 100 is depicted as
being in an easel state 312. FIG. 6G shows that a first easel
output configuration 618 may be displayed on the touch sensitive
display 110. FIG. 6H shows that a second easel output configuration
620 may be displayed on the touch sensitive display 114. The device
100 may be configured to depict either the first easel output
configuration 618 or the second easel output configuration 620
individually. Alternatively, both the easel output configurations
618, 620 may be presented simultaneously. In some embodiments, the
easel output configurations 618, 620 may be similar or identical to
the landscape output configurations 612, 616. The device 100 may
also be configured to display one or both of the easel output
configurations 618, 620 while in a modified easel state 316. It
should be appreciated that simultaneous utilization of the easel
output configurations 618, 620 may facilitate two-person games
(e.g., Battleship.RTM., chess, checkers, etc.), multi-user
conferences where two or more users share the same device 100, and
other applications. As can be appreciated, it may also be possible
to transition the device 100 from the display of one or both
configurations 618, 620 to any other configuration described herein
depending upon which state the device 100 is moved to.
[0164] FIG. 6I depicts yet another output configuration that may be
accommodated while the device 100 is in an open portrait state 320.
Specifically, the device 100 may be configured to present a single
continuous image across both touch sensitive displays 110, 114 in a
portrait configuration referred to herein as a Portrait-Max (PMax)
configuration 624. In this configuration, data (e.g., a single
image, application, window, icon, video, etc.) may be split and
displayed partially on one of the touch sensitive displays while
the other portion of the data is displayed on the other touch
sensitive display. The PMax configuration 624 may facilitate a
larger display and/or better resolution for displaying a particular
image on the device 100. Similar to other output configurations, it
may be possible to transition the device 100 from the PMax
configuration 624 to any other output configuration described
herein depending upon which state the device 100 is moved to.
[0165] FIG. 6J depicts still another output configuration that may
be accommodated while the device 100 is in an open landscape state
348. Specifically, the device 100 may be configured to present a
single continuous image across both touch sensitive displays 110,
114 in a landscape configuration referred to herein as a
Landscape-Max (LMax) configuration 628. In this configuration, data
(e.g., a single image, application, window, icon, video, etc.) may
be split and displayed partially on one of the touch sensitive
displays while the other portion of the data is displayed on the
other touch sensitive display. The LMax configuration 628 may
facilitate a larger display and/or better resolution for displaying
a particular image on the device 100. Similar to other output
configurations, it may be possible to transition the device 100
from the LMax configuration 628 to any other output configuration
described herein depending upon which state the device 100 is
moved to.
[0166] Display Controls:
[0167] FIGS. 7A-C depict graphical representations of embodiments
of a device 100 in an open portrait state 320 where the primary
screen 104 displays application data in a first portrait
configuration 604 and the secondary screen 108 displays application
data in a second portrait configuration 608. Other pages 704, 712
that are not visible on the primary screen 104 or the secondary
screen 108 are graphically represented as surrounded by dashed
lines.
[0168] In one embodiment of the device 100, the MDM class 524
arranges applications, desktops, and/or other displayable
information through the Window Management class 532 onto separate
pages 704, 708, 712, 716 and organizes those pages 704, 708, 712,
716 into virtual "stacks." These stacks allow pages to be created,
deleted, shuffled, and/or moved by a user or an output from the MDM
class 524, much as a user manipulates the playing cards of a deck.
In FIGS. 7A-C, the stacks of individual pages 704,
708, 712, 716 are shown spread outwardly in a linear format on both
the left side of the primary screen 104 as well as the right side of
the secondary screen 108 to better visualize the organization of
the virtual stacks. The depicted position of a page relative to
the primary screen 104 or the secondary screen 108 represents that
page's position in its respective stack as stored on the device
100. Therefore, if a page is
graphically represented immediately adjacent to the primary screen
104, or immediately adjacent to the secondary screen 108, it is the
first page in the stack that may be moved onto that adjacent screen
with a user input gesture. Although only several pages of a stack
are shown in FIGS. 7A-C, the number of pages available to be
organized in the stack can vary and, in some embodiments, may equal
the number of applications, desktops, and/or other displayable
information created on the device 100. Moreover, the stack adjacent
to the primary screen 104 and the stack adjacent to the secondary
screen 108 may act as a unified stack (where a move or manipulation
of a page from one stack moves or manipulates the pages in both
stacks), or as independent stacks (each screen having its own
independently navigable stack of pages).
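The page-stack arrangement just described might be sketched as
follows; the names are hypothetical, and pages are represented by
strings for brevity:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Minimal sketch: each screen has a stack of pages; the page at
    // the top of a stack is the page immediately adjacent to that
    // screen and is the first page a gesture can move onto the screen.
    public class PageStacks {

        private final Deque<String> firstStack = new ArrayDeque<>();  // primary 104
        private final Deque<String> secondStack = new ArrayDeque<>(); // secondary 108

        public void pushFirst(String page)  { firstStack.push(page); }
        public void pushSecond(String page) { secondStack.push(page); }

        // A gesture moving the adjacent page onto a screen pops the stack.
        public String moveOntoPrimary()   { return firstStack.poll(); }
        public String moveOntoSecondary() { return secondStack.poll(); }
    }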
[0169] In an embodiment of the present disclosure, it is
anticipated that the concept of stacks disclosed herein may also be
applied to desktops available on the device 100. In other words, a
desktop virtual stack may be used for the desktops which can be
displayed by the device 100 and an application virtual stack may be
used for the applications which can be displayed by the device 100.
Specifically, when a desktop is revealed it may be divided into
multiple pages for display on multiple screens of the device 100.
These pages may be manipulated and/or stored in their own desktop
virtual stacks separate from the application virtual stacks and/or
other virtual stacks. This separation of stacks allows for greater
user flexibility in navigating through different applications or
desktops by creating an intuitive interface dependent on the data
displayed to the screens 104, 108. However, it is also anticipated
that these stacks could be combined to form a single virtual
stack.
[0170] FIG. 7A shows a graphical representation of the device 100
running three separate applications. As shown, the first
application is running in a dual-screen mode, also known as
maximized, on both the primary screen 104 and the secondary screen
108. The device 100 is shown in an open portrait state 320. In the
present embodiment, both the primary screen 104 and secondary
screen 108 are visible while the first application is running.
Although not displayed to the primary screen 104 or secondary
screen 108, two applications are running on separate pages in the
virtual stack. Specifically, the first stack first page 704 and the
second stack first page 712 are running different applications
(e.g., second application and third application).
[0171] FIG. 7B shows an embodiment of the present device 100 where
the desktop reveal expansion has been initiated. The device 100 is
shown in an open portrait state 320. The desktop reveal expansion
may be initiated by (1) a user input gesture, (2) a combination of
user input gestures, (3) a memory output, (4) a response to a
predetermined condition (e.g., application control, power levels,
communications interrupt, operating system state, device screen
state open/closed, timers, and single or multiple sensor outputs)
or (5) any combination thereof, that the MDM class 524 registers
and interprets as an initiation command. As illustrated in FIG. 7B,
once the desktop reveal expansion is initiated, the MDM class 524
and the Window Management class 532 split the dual-screen
application into two separate pages and move the first page of the
dual-screen application to the first stack first page 704 (shown on
the left-hand side of the screen) and the second page of the
dual-screen application to the second stack first page 712 (shown
on the right-hand side of the screen). At this time, the
applications that were running on the first stack first page 704
and the second stack first page 712 are moved to the first stack
second page 708 and the second stack second page 716, respectively.
The first and second pages of the desktop are now visible on the
primary screen 104 and the secondary screen 108, respectively. From
either screen 104, 108 of the displayed desktop, a user may select
an application by providing a user input gesture 720. This user
input gesture 720 is registered by the MDM class 524 where it is
interpreted as a command to deactivate the reveal desktop and open
a new application on the screen from which the user input gesture
720 was detected. In some embodiments, the new application may
display onto a screen other than that from which the user input was
detected. Specifically, it is anticipated that the MDM class 524
can determine through logic, rules, user input, or combinations
thereof to open a new application on either screen of the device
100, in accordance with predetermined conditions.
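A hypothetical sketch of the expansion shown in FIG. 7B follows,
with the stack contents of FIG. 7A as the starting point; the names
are illustrative only:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Minimal sketch: the desktop reveal splits the dual-screen
    // application into two pages and pushes one onto each stack,
    // moving the pages already there one position deeper.
    public class DesktopReveal {

        private final Deque<String> firstStack = new ArrayDeque<>();
        private final Deque<String> secondStack = new ArrayDeque<>();

        public void reveal(String dualScreenApp) {
            firstStack.push(dualScreenApp + " (page 1)");
            secondStack.push(dualScreenApp + " (page 2)");
            // The desktop is now what both screens display.
        }

        public static void main(String[] args) {
            DesktopReveal r = new DesktopReveal();
            r.firstStack.push("second application");  // first stack first page 704
            r.secondStack.push("third application");  // second stack first page 712
            r.reveal("first application");
            System.out.println(r.firstStack);  // [first application (page 1), second application]
            System.out.println(r.secondStack); // [first application (page 2), third application]
        }
    }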
[0172] In embodiments where multiple pages are open in the first
and second stack, an initiation of the desktop reveal expansion
will move each page one position further away from the primary and
secondary screens 104, 108, respectively. It is anticipated that at
least one embodiment of the present disclosure provides that pages
in the first stack, as well as pages in the second stack, may move
together as if the pages were virtually connected. Therefore, a
single position move of one page in a stack correspondingly moves
all of the pages in that stack and the other stack by a single
position. In addition, it is anticipated that this concept can also
be applied to the pages of a virtual desktop stack.
[0173] FIG. 7C shows an embodiment of the present device 100, in an
open portrait state 320, wherein a new application has been chosen
from the secondary screen 108 and is transitioning 724 onto the
secondary screen 108 where it will be displayed. As the new
application is transitioning 724 onto the secondary screen 108, the
reveal desktop is deactivated by the MDM class 524 and the
dual-screen application is displayed on the primary screen 104 by
the Window Management class 532. In at least some embodiments, the
visual content of the dual-screen application presented by the MDM
class 524 and the Window Management class 532 may depend on the
screen from which the new application is chosen by the user input
gesture 720. In some cases, the MDM class 524 and Window Management
class 532 may present the first stack information or the second
stack information as defined by protocol and/or logic.
[0174] Referring now to FIG. 8, a method 800 for launching
applications into a revealed desktop will be described in
accordance with at least some embodiments of the present
disclosure. The method is initiated at step 804. Specifically, a
device 100 is depicted as
initiated at step 804. Specifically, a device 100 is depicted as
initially running a dual-screen application spread across the
primary screen 104 and the secondary screen 108 while in an open
portrait state 320. The method continues when the device 100
detects a desktop reveal input (step 812) from the user. In some
embodiments, this input is registered at the MDM class 524.
[0175] Next, the MDM class 524 must move the dual-screen
application off the primary and secondary screens 104, 108 onto a
virtual stack or stacks. In some embodiments, the dual-screen
application may be logically divided into two separate pages where
one of the pages is stored in the first stack and the other page is
stored in the second stack. As the dual-screen application is
removed from display (steps 816 and 820), the desktop is revealed on
the primary and secondary screens 104, 108 (step 824).
[0176] Once the desktop is revealed, the MDM class 524 detects a
user gesture input (step 828). The nature of the user gesture input
will determine whether the reveal desktop function is deactivated
or whether an application is chosen to launch (step 836) from the
desktop. In
the event that a user selects to launch an application from the
desktop, the MDM class 524 must make at least two determinations.
First, which application was chosen by the user and second, from
which screen was the user gesture input made. Once the MDM class
524 registers the application selection and user gesture input
location, the MDM class 524 and the Window Management class 532
use the information to launch the chosen application for display
by the screen from which it was chosen. For example, if the user
selected an application from the secondary screen 108, the
application would launch and be displayed on the secondary screen
108. In contrast, if the user selected an application from the
primary screen 104, then the application would launch and be
displayed on the primary screen 104.
[0177] After or while the MDM class 524 and Window Management class
532 launch and display the application on the chosen screen, the
reveal desktop is deactivated and one of the dual-screen
applications (stored in the virtual stack) is moved onto the
display that was not chosen. In some embodiments, the device 100
may be running and displaying a single-screen application on the
primary screen 104 and a single-screen application on the secondary
screen 108. In that case, when the desktop is first revealed, the
applications are moved from their respective screens into their
respective virtual stacks. Next, and as described above, when a new
application is launched from the revealed desktop it displays on
the screen from which it was chosen. However, the other screen will
display one of the previously running and displayed applications.
The decision of which application to display on the other screen
depends on the application priority, the sequence in which the
applications were created, and/or other determinative functions.
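The screen-selection logic of this method might be sketched as
follows; this is a hypothetical illustration, and the class and
method names do not appear in the application:

    // Minimal sketch: launch the chosen application on the screen
    // from which the gesture was made, deactivate the revealed
    // desktop, and show a previously stacked application on the
    // other screen.
    public class RevealLauncher {

        enum Screen { PRIMARY, SECONDARY }

        public void launchFromDesktop(String chosenApp, Screen gestureScreen,
                                      String stackedApp) {
            display(chosenApp, gestureScreen);
            Screen other = (gestureScreen == Screen.PRIMARY)
                    ? Screen.SECONDARY : Screen.PRIMARY;
            display(stackedApp, other); // the desktop is no longer shown
        }

        private void display(String app, Screen screen) {
            System.out.println(app + " -> " + screen);
        }

        public static void main(String[] args) {
            new RevealLauncher().launchFromDesktop(
                    "email client", Screen.SECONDARY, "dual-screen application");
        }
    }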
[0178] FIG. 9 shows a method 900 for launching applications into a
revealed desktop in accordance with at least some embodiments of
the present disclosure. The method is initiated at step 904.
Specifically, a device 100 may be in the process of displaying
application data to the primary screen 104 and/or the secondary
screen 108. Any application data displayed to either the primary
screen 104 and/or the secondary screen 108 is detected (step 908).
The method continues when the device 100 detects a desktop reveal
input (step 912) from the user. In some embodiments, this input is
registered at the MDM class 524.
[0179] Next, if the device 100 was displaying an application to
either screen 104, 108, as detected at step 908, the MDM class 524
must move the application off the primary and/or secondary screen
104, 108 onto a virtual stack or stacks. In some embodiments, an
application displayed to the primary screen 104 may be stored in
the first stack while an application displayed to the secondary
screen 108 may be stored in the second stack. As an application is
removed from display (step 920), the desktop is revealed on the
primary and secondary screens 104, 108 (step 924). In the event
that the device 100 is not displaying application data, the desktop
is immediately revealed on the primary and secondary screens 104,
108 (step 924).
[0180] Once the desktop is revealed, the MDM class 524 detects a
user gesture input (step 928). The nature of the user gesture input
will determine whether the reveal desktop function is deactivated
or whether an application is chosen to launch (step 936) from the
desktop. In
the event that a user selects to launch an application from the
desktop, the MDM class 524 must make at least two determinations.
First, which application was chosen by the user and second, which
screen will display the application. In some embodiments, the
second determination may result in the application being displayed
on a screen in accordance with certain predetermined conditions.
These predetermined conditions may include rules, logic,
application priority, application creation sequence, and/or other
determinative functions. Once the MDM class 524 registers the
application selection in accordance with the predetermined
conditions, the MDM class 524 and the Window Management class 532
use the information to launch the chosen application for display
by a specific screen (step 940).
[0181] After or while the MDM class 524 and Window Management class
532 launch and display the application on the specific screen,
the reveal desktop is deactivated and the other screen is
configured to display a previously detected application (at step
908) or other data (step 948). The decision of which application,
or data, to display on the other screen may depend on the
application priority, the sequence in which the applications were
created, and/or
other determinative functions.
[0182] The exemplary systems and methods of this disclosure have
been described in relation to display controls for a multi-screen
device. However, to avoid unnecessarily obscuring the present
disclosure, the preceding description omits a number of known
structures and devices. This omission is not to be construed as a
limitation of the scope of the claims. Specific details are set
forth to provide an understanding of the present disclosure. It
should however be appreciated that the present disclosure may be
practiced in a variety of ways beyond the specific details set forth
herein.
[0183] Furthermore, while the exemplary aspects, embodiments,
and/or configurations illustrated herein show the various
components of the system collocated, certain components of the
system can be located remotely, at distant portions of a
distributed network, such as a LAN and/or the Internet, or within a
dedicated system. Thus, it should be appreciated that the
components of the system can be combined into one or more devices,
such as a Personal Computer (PC), laptop, netbook, Personal Digital
Assistant (PDA), tablet, etc., or collocated on a particular node
of a distributed network, such as an analog and/or digital
telecommunications network, a packet-switch network, or a
circuit-switched network. It will be appreciated from the preceding
description, and for reasons of computational efficiency, that the
components of the system can be arranged at any location within a
distributed network of components without affecting the operation
of the system. For example, the various components can be located
in a switch such as a PBX and media server, gateway, in one or more
communications devices, at one or more users' premises, or some
combination thereof. Similarly, one or more functional portions of
the system could be distributed between a telecommunications
device(s) and an associated computing device.
[0184] Furthermore, it should be appreciated that the various links
connecting the elements can be wired or wireless links, or any
combination thereof, or any other known or later developed
element(s) that is capable of supplying and/or communicating data
to and from the connected elements. These wired or wireless links
can also be secure links and may be capable of communicating
encrypted information. Transmission media used as links, for
example, can be any suitable carrier for electrical signals,
including coaxial cables, copper wire and fiber optics, and may
take the form of acoustic or light waves, such as those generated
during radio-wave and infra-red data communications.
[0185] Also, while the flowcharts have been discussed and
illustrated in relation to a particular sequence of events, it
should be appreciated that changes, additions, and omissions to
this sequence can occur without materially affecting the operation
of the disclosed embodiments, configuration, and aspects.
[0186] A number of variations and modifications of the disclosure
can be used. It would be possible to provide for some features of
the disclosure without providing others.
[0187] In some embodiments, the systems and methods of this
disclosure can be implemented in conjunction with a special purpose
computer, a programmed microprocessor or microcontroller and
peripheral integrated circuit element(s), an ASIC or other
integrated circuit, a digital signal processor, a hard-wired
electronic or logic circuit such as a discrete element circuit, a
programmable logic device or gate array such as PLD, PLA, FPGA,
PAL, special purpose computer, any comparable means, or the like.
In general, any device(s) or means capable of implementing the
methodology illustrated herein can be used to implement the various
aspects of this disclosure. Exemplary hardware that can be used for
the disclosed embodiments, configurations and aspects includes
computers, handheld devices, telephones (e.g., cellular, Internet
enabled, digital, analog, hybrids, and others), and other hardware
known in the art. Some of these devices include processors (e.g., a
single or multiple microprocessors), memory, nonvolatile storage,
input devices, and output devices. Furthermore, alternative
software implementations including, but not limited to, distributed
processing or component/object distributed processing, parallel
processing, or virtual machine processing can also be constructed
to implement the methods described herein.
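As a non-limiting sketch of the parallel-processing alternative noted
above, the per-screen work of a method such as those described herein
could be submitted to a small thread pool; the task bodies shown are
hypothetical placeholders.

    // Hypothetical sketch: updating each screen from an independent task.
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ParallelUpdateExample {
        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(2);
            List<Runnable> perScreenTasks = List.of(
                    () -> System.out.println("Rendering application on first screen"),
                    () -> System.out.println("Rendering desktop on second screen"));
            perScreenTasks.forEach(pool::submit);
            pool.shutdown();  // allow queued tasks to finish, then exit
        }
    }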
[0188] In yet another embodiment, the disclosed methods may be
readily implemented in conjunction with software using object-based
or object-oriented software development environments that provide
portable source code that can be used on a variety of computer or
workstation platforms. Alternatively, the disclosed system may be
implemented partially or fully in hardware using standard logic
circuits or VLSI design. Whether software or hardware is used to
implement the systems in accordance with this disclosure is
dependent on the speed and/or efficiency requirements of the
system, the particular function, and the particular software or
hardware systems or microprocessor or microcomputer systems being
utilized.
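By way of a non-limiting illustration of such portable source code,
platform-specific detail can be isolated behind a single abstract
type so that the remaining source is reused unchanged across
platforms; all names below are hypothetical assumptions.

    // Hypothetical sketch: one abstract seam isolates platform detail.
    abstract class PlatformDisplay {
        abstract void blit(int screenId, byte[] frame);

        // Chooses an implementation at run time; the rest of the source
        // is identical on every computer or workstation platform.
        static PlatformDisplay forCurrentPlatform() {
            String os = System.getProperty("os.name", "").toLowerCase();
            return os.contains("linux") ? new LinuxDisplay() : new DefaultDisplay();
        }
    }

    class LinuxDisplay extends PlatformDisplay {
        void blit(int screenId, byte[] frame) { /* platform-specific call */ }
    }

    class DefaultDisplay extends PlatformDisplay {
        void blit(int screenId, byte[] frame) { /* generic fallback */ }
    }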
[0189] In yet another embodiment, the disclosed methods may be
partially implemented in software that can be stored on a storage
medium and executed on a programmed general-purpose computer with
the cooperation of a controller and memory, a special purpose
computer, a microprocessor, or the like. In these instances, the
systems and methods of this disclosure can be implemented as a
program embedded on a personal computer, such as an applet, JAVA® or CGI script, as
a resource residing on a server or computer workstation, as a
routine embedded in a dedicated measurement system, system
component, or the like. The system can also be implemented by
physically incorporating the system and/or method into a software
and/or hardware system.
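As a non-limiting example of such an embedded routine, the
screen-selection logic could be packaged as a small standalone JAVA®
program; the rule shown is a simplified, hypothetical stand-in for
the logical rules discussed herein.

    // Hypothetical sketch: a routine form of the display-selection logic.
    public class LaunchRoutine {
        // Simplified stand-in rule: launch onto the screen that received input.
        static int targetScreen(int inputScreen) {
            return inputScreen;
        }

        public static void main(String[] args) {
            int inputScreen = args.length > 0 ? Integer.parseInt(args[0]) : 1;
            System.out.println("Launching application onto screen "
                    + targetScreen(inputScreen));
        }
    }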
[0190] Although the present disclosure describes components and
functions implemented in the aspects, embodiments, and/or
configurations with reference to particular standards and
protocols, the aspects, embodiments, and/or configurations are not
limited to such standards and protocols. Other similar standards
and protocols not mentioned herein are in existence and are
considered to be included in the present disclosure. Moreover, the
standards and protocols mentioned herein and other similar
standards and protocols not mentioned herein are periodically
superseded by faster or more effective equivalents having
essentially the same functions. Such replacement standards and
protocols having the same functions are considered equivalents
included in the present disclosure.
[0191] The present disclosure, in various aspects, embodiments,
and/or configurations, includes components, methods, processes,
systems and/or apparatus substantially as depicted and described
herein, including various aspects, embodiments, configurations,
subcombinations, and/or subsets thereof. Those of
skill in the art will understand how to make and use the disclosed
aspects, embodiments, and/or configurations after understanding the
present disclosure. The present disclosure, in various aspects,
embodiments, and/or configurations, includes providing devices and
processes in the absence of items not depicted and/or described
herein or in various aspects, embodiments, and/or configurations
hereof, including in the absence of such items as may have been
used in previous devices or processes, e.g., for improving
performance, achieving ease and/or reducing cost of
implementation.
[0192] The foregoing discussion has been presented for purposes of
illustration and description. The foregoing is not intended to
limit the disclosure to the form or forms disclosed herein. In the
foregoing Detailed Description for example, various features of the
disclosure are grouped together in one or more aspects,
embodiments, and/or configurations for the purpose of streamlining
the disclosure. The features of the aspects, embodiments, and/or
configurations of the disclosure may be combined in alternate
aspects, embodiments, and/or configurations other than those
discussed above. This method of disclosure is not to be interpreted
as reflecting an intention that the claims require more features
than are expressly recited in each claim. Rather, as the following
claims reflect, inventive aspects lie in less than all features of
a single foregoing disclosed aspect, embodiment, and/or
configuration. Thus, the following claims are hereby incorporated
into this Detailed Description, with each claim standing on its own
as a separate preferred embodiment of the disclosure.
[0193] Moreover, though the description has described
one or more aspects, embodiments, and/or configurations and certain
variations and modifications, other variations, combinations, and
modifications are within the scope of the disclosure, e.g., as may
be within the skill and knowledge of those in the art, after
understanding the present disclosure. It is intended to obtain
rights which include alternative aspects, embodiments, and/or
configurations to the extent permitted, including alternate,
interchangeable and/or equivalent structures, functions, ranges or
steps to those claimed, whether or not such alternate,
interchangeable and/or equivalent structures, functions, ranges or
steps are disclosed herein, and without intending to publicly
dedicate any patentable subject matter.
* * * * *