U.S. patent application number 15/578938 was published by the patent office on 2018-06-21 under publication number 20180173373 for a method and apparatus for using gestures across multiple devices. The applicant listed for this patent is Nureva Inc. The invention is credited to Doug HILL and Taco VAN IEPEREN.
Application Number: 20180173373 (15/578938)
Family ID: 57502756
Publication Date: 2018-06-21

United States Patent Application 20180173373
Kind Code: A1
HILL; Doug; et al.
June 21, 2018
METHOD AND APPARATUS FOR USING GESTURES ACROSS MULTIPLE DEVICES
Abstract
Method and apparatus for implementing gestures across user
interface display apparatuses, including detecting and saving, at a
first user interface display apparatus, an initial user input;
determining whether the initial user input is within a
predetermined proximity to a boundary with a second user interface
display apparatus; detecting and saving additional user input
continuing from the initial user input; when the initial user input
is within the predetermined proximity, incorporating additional
information from a transition message received within a
predetermined time period from the second user interface display
apparatus to the saved user input, the predetermined time period
corresponding to a message time between the first and second user
interface display apparatuses from a time of the initial user
input; and implementing the saved user input on one or more of the
first and second user interface display apparatuses.
Inventors: HILL; Doug; (Calgary, CA); VAN IEPEREN; Taco; (Calgary, CA)
Applicant: Nureva Inc. (Calgary, CA)
Family ID: 57502756
Appl. No.: 15/578938
Filed: June 10, 2016
PCT Filed: June 10, 2016
PCT No.: PCT/CA2016/050660
371 Date: December 1, 2017
Related U.S. Patent Documents
Application Number: 62/175,029
Filing Date: Jun 12, 2015
Current U.S. Class: 1/1
Current CPC Class: G09G 2354/00 (2013.01); G06F 3/04845 (2013.01); G06F 3/0485 (2013.01); G06F 3/1446 (2013.01); G06F 3/0481 (2013.01); G06F 3/04883 (2013.01)
International Class: G06F 3/0481 (2006.01); G06F 3/0488 (2006.01); G06F 3/0485 (2006.01); G06F 3/0484 (2006.01); G06F 3/14 (2006.01)
Claims
1. A method for implementing a user's single gesture across a
plurality of user interface display apparatuses, comprising:
detecting and saving, at a first user interface display apparatus,
an initial user single input; determining whether the initial user
single input is within a predetermined proximity to a boundary with
a second user interface display apparatus; detecting and saving
additional user input, which corresponds to the initial user single
input continuing from the initial user single input across the
boundary between the first and second user interface display
apparatuses; when the initial user single input is within the
predetermined proximity of the boundary with the second user
interface display apparatus, incorporating in the saved additional
user input a transition information message received within a
predetermined time period from the second user interface display
apparatus, the predetermined time period corresponding to a message
time between the first and second user interface display
apparatuses from a time of the initial user single input; and
implementing the saved additional user input on at least the second
user interface display apparatus.
2. The method of claim 1, further comprising: detecting an end to
the additional user input; determining whether the end to the
additional user input is within the predetermined proximity to the
boundary with the second user interface display apparatus; and when
the end to the additional user input is within the predetermined
proximity, transmitting a user input transition message to the
second user interface display apparatus.
3. The method of claim 1, further comprising: detecting an end to
the additional user input; determining whether the end to the
additional user input is within the predetermined proximity to an
additional boundary with a third user interface display apparatus;
and when the end to the additional user input is within the
predetermined proximity to the additional boundary with the third
user interface display apparatus, transmitting a user input
transition message to the third user interface display
apparatus.
4. The method of claim 3, wherein said implementing is performed on
one or more of the first, second, and third user interface display
apparatuses.
5. The method of claim 1, wherein said incorporating further
comprises updating a server with the saved additional user
input.
6. The method of claim 1, wherein the saved additional user input
corresponds to an identification and a move of one or more
displayed objects on one or more of the first and second user
interface display apparatuses.
7. The method of claim 1, wherein the saved additional user input
corresponds to scrolling a shared workspace or document that is
displayed on the first and second user interface display
apparatuses.
8. The method of claim 1, wherein the saved additional user input
corresponds to inking a shared document that is displayed on the
first and second user interface display apparatuses.
9. A program embodied in a non-transitory computer readable medium
for implementing a user's single gesture across a plurality of user
interface display apparatuses, said program comprising instructions
to perform: detecting and saving, at a first user interface display
apparatus, an initial user single input; determining whether the
initial user single input is within a predetermined proximity to a
boundary with a second user interface display apparatus; detecting
and saving additional user input, which corresponds to the initial
user single input continuing from the initial user single input
across the boundary between the first and second user interface
display apparatuses; when the initial user single input is within
the predetermined proximity of the boundary with the second user
interface display apparatus, incorporating in the saved additional
user input a transition information message received within a
predetermined time period from the second user interface display
apparatus, the predetermined time period corresponding to a message
time between the first and second user interface display
apparatuses from a time of the initial user single input; and
implementing the saved additional user input on at least the second
user interface display apparatus.
10. The program of claim 9, further comprising instructions to
perform: detecting an end to the additional user input; determining
whether the end to the additional user input is within the
predetermined proximity to the boundary with the second user
interface display apparatus; and when the end to the additional
user input is within the predetermined proximity, transmitting a
user input transition message to the second user interface display
apparatus.
11. The program of claim 9, further comprising instructions to
perform: detecting an end to the additional user input; determining
whether the end to the additional user input is within the
predetermined proximity to an additional boundary with a third user
interface display apparatus; and when the end to the additional
user input is within the predetermined proximity to the additional
boundary with the third user interface display apparatus,
transmitting a user input transition message to the third user
interface display apparatus.
12. The program of claim 11, wherein said implementing is performed
on one or more of the first, second, and third user interface
display apparatuses.
13. The program of claim 9, wherein said incorporating further
comprises updating a server with the saved additional user
input.
14. The program of claim 9, wherein the saved additional user input
corresponds to an identification and a move of one or more
displayed objects on one or more of the first and second user
interface display apparatuses.
15. The program of claim 9, wherein the saved additional user input
corresponds to scrolling a shared workspace or document that is
displayed on the first and second user interface display
apparatuses.
16. The program of claim 9, wherein the saved additional user input
corresponds to inking a shared document that is displayed on the
first and second user interface display apparatuses.
17. A user interface display apparatus for implementing a user's
single gesture across a plurality of user interface display
apparatuses, comprising: a display apparatus; one or more user
interface apparatuses; memory; and one or more processors
configured to execute one or more programs stored on the memory,
said one or more programs comprising instructions to perform:
detecting, at the one or more user interface apparatuses, an
initial user single input; saving the initial user single input;
determining whether the initial user input is within a
predetermined proximity to a boundary of the display apparatus with
a second user interface display apparatus; detecting and saving
additional user input, which corresponds to the initial user single
input continuing from the initial user single input across the
boundary between the first and second user interface display
apparatuses; when the initial user single input is within the
predetermined proximity of the boundary with the second user
interface display apparatus, incorporating in the saved additional
user input a transition information message received within a
predetermined time period from the second user interface display
apparatus, the predetermined time period corresponding to a message
time between the user interface display apparatus and the second
user interface display apparatus from a time of the initial user
single input; and implementing the saved additional user input on
at least the second user interface display apparatus.
18. The user interface display apparatus of claim 17, wherein said
one or more programs further comprises instructions to perform:
detecting an end to the additional user input; determining whether
the end to the additional user input is within the predetermined
proximity to the boundary of the display apparatus with the second
user interface display apparatus; and when the end to the
additional user input is within the predetermined proximity,
transmitting a user input transition message to the second user
interface display apparatus.
19. The user interface display apparatus of claim 17, wherein said
one or more programs further comprises instructions to perform:
detecting an end to the additional user input; determining whether
the end to the additional user input is within the predetermined
proximity to another boundary of the display apparatus with a third
user interface display apparatus; and when the end to the
additional user input is within the predetermined proximity to the
additional boundary of the display apparatus with the third user
interface display apparatus, transmitting a user input transition
message to the third user interface display apparatus.
20. The user interface display apparatus of claim 19, wherein said
implementing is performed in cooperation with one or more of the
second and third user interface display apparatuses.
21. The user interface display apparatus of claim 17, wherein said
incorporating further comprises updating a server with the saved
additional user input.
22. The user interface display apparatus of claim 17, wherein the
saved additional user input corresponds to an identification and a
move of one or more displayed objects on one or more of the display
apparatus and the second user interface display apparatus.
23. The user interface display apparatus of claim 17, wherein the
saved additional user input corresponds to scrolling a shared
workspace or document that is displayed on the display apparatus
and the second user interface display apparatus.
24. The user interface display apparatus of claim 17, wherein the
saved additional user input corresponds to inking a shared document
that is displayed on the display apparatus and the second user
interface display apparatus.
25. The method of claim 1, wherein the first and second user
interface display apparatuses are separate devices which
communicate with each other through a server.
26. The program of claim 9, wherein the first and second user
interface display apparatuses are separate devices which
communicate with each other through a server.
27. The user interface display apparatus of claim 17, wherein the
first and second user interface display apparatuses are separate
devices which communicate with each other through a server.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is based upon and claims the benefit of
priority from the prior U.S. Provisional Application No.
62/175,029, filed on Jun. 12, 2015, the entire contents of which
are incorporated herein by reference.
BACKGROUND OF THE INVENTION
Field of the Invention
[0002] The present invention generally relates to a computer system for managing user interfaces and displays across multiple devices. More specifically, the present invention is directed to methods and apparatuses for implementing user interface inputs that span multiple devices.
Description of Related Art
[0003] Server-based software which allows multiple users to edit a
shared document has become common. This software can be executed on
touch displays that are physically adjacent. In some situations
(high performance, for example), it is preferable that each display
is controlled by a separate computer connected to the same shared
digital workspace, making independent display units. When these
displays are coordinated through the server, they can be made to
appear as if they are a single view of an electronic document or
object and of a shared contiguous background. One problem with this
setup is that gestures happen in real-time, but communication
between the displays is delayed by the message transfer time
through the server. Because touch events occur in real time, the displays are not guaranteed to have the same version of the document, and a touch event may therefore be processed inconsistently between the displays. This is particularly problematic when the
user initiates a gesture on one display and continues that gesture
on the second display, because the second display may not have
enough information to continue with the action properly. For
example, when the user drags an object from one display to another,
the object should stay under the user's finger. However, when the
object encounters the boundary between the two displays, the first
display has the correct position of the object, but the second display has a slightly older position. If the second display immediately processes the drag, the object may not have arrived yet, and the drag may be lost or turned into an incorrect gesture.
The same problem exists when the user is scrolling the shared view
of the displays. A further problem is that the message delay, as well as bandwidth limits, can cause scrolling started on the first display to appear jerky and to lag behind when mirrored on the second display.
[0004] U.S. Pat. No. 8,330,733B2 describes touch-sensitive display screens and interface software. In the case of a multi-screen workspace, the interface software is operable to allow touch inputs made in connection with a first screen to generate an inertial movement of a displayed object which results in the object moving to and coming to rest on another of the screens.
[0005] U.S. Patent Application No. 2011/0090155A1 describes a
method for use by a touch screen device that includes detecting a
first touch screen gesture at a first display surface of an
electronic device, detecting a second touch screen gesture at a
second display surface of the electronic device, and discerning
that the first touch screen gesture and the second touch screen
gesture are representative of a single command affecting a display
on the first and second display surfaces.
[0006] International Patent Application Publication No.
WO/2013046182A3 describes an apparatus comprising: a first display
area; a second display area; and an interface separating the first
display area from the second display area; and a display controller
configured to control display of a user interface element in a
first configuration when the user interface element is movable
across the interface from the first display area to the second
display area and a first criteria dependent upon a distance of the
user interface element from the interface is satisfied and is
configured to control display of the user interface element in a
second configuration, different to the first configuration, when
the user interface element is movable across the interface from the
first display area to the second display area and the first
criteria concerning a distance of the user interface element from
the interface is not satisfied.
[0007] U.S. Pat. No. 8,751,970B2 describes embodiments of a
multi-screen synchronous slide gesture. In various embodiments, a
first motion input is recognized at a first screen of a
multi-screen system, and the first motion input is recognized when
moving in a particular direction across the first screen. A second
motion input is recognized at a second screen of the multi-screen
system, where the second motion input is recognized when moving in
the particular direction across the second screen and approximately
when the first motion input is recognized. A synchronous slide
gesture can then be determined from the recognized first and second
motion inputs.
[0008] U.S. Pat. No. 6,331,840B1 describes an apparatus and process where an object can be manipulated between multiple discontinuous screens, each separated from the others by regions that are not touch sensitive. First, a pointing implement contacts the source touch-screen to select the object, storing parameters in the computer's buffer. The pointing implement is then moved to the target touch-screen, where it contacts the location to which the object is to be dragged; the object is then released from the buffer so that it is pasted to the target touch-screen. Preferably, when the
object is touched at the source screen, a timer starts, and if the
target screen is touched before timeout, the object appears at the
target.
SUMMARY OF THE INVENTION
[0009] A system is provided for handling gestures that cross between two or more physically adjacent displays. The system may include a computer for controlling the display units, two or more touch-enabled displays, a communications interface for communicating between the display units and the server, and a server for storing the digital workspace background, shared objects, and ink data for remote device synchronization.
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] FIG. 1 is an illustration of a system having a shared
digital workspace with a contiguous digital background and shared
objects according to an embodiment of the present invention.
[0011] FIGS. 2a(i) and 2a(ii) are detailed illustrations of dragging an object in the shared digital workspace across multiple display units in the system. FIGS. 2b(i) and 2b(ii) are detailed illustrations of scrolling the shared digital workspace across multiple display units in the system. FIG. 2c is a detailed illustration of inking on the shared digital workspace across multiple display units in the system.
[0012] FIG. 3 is a detailed flowchart of dragging an object in the
shared digital workspace across multiple display units in the
system.
[0013] FIG. 4 is a detailed flowchart of inking on the shared
digital workspace across multiple display units in the system.
[0014] FIG. 5 is a detailed flowchart of scrolling the shared digital
workspace across multiple display units in the system.
DETAILED DESCRIPTION OF THE PRESENTLY PREFERRED EXEMPLARY
EMBODIMENTS
[0015] With reference to the drawings, a non-limiting illustrative
embodiment will now be described. While the description below is
given with respect to digital workspace and object sharing between
two equally-sized, physically adjacent touch-sensitive displays,
other combinations of devices may be used without departing from
the spirit or scope of the attached claims. For example, the two
displays need not be physically adjacent. Furthermore, three, four,
five, six, nine or more displays may share the digital workspace.
The server allows for a plurality of remote locations and devices
to share the same digital workspace and objects for a contiguous
and consistent experience and presentation. The devices may be any
combination of large-screen display(s), monitor(s), laptop
display(s), pad display(s), cellphone display(s), and the like.
[0016] Conventional art may represent a system having a digital workspace stored on a server and accessed from a variety of devices. With reference to FIG. 1, system 100 illustrates a system having a shared digital workspace, where the shared digital workspace is stored on a server 110, which may be a cloud server, a host, or the like. Furthermore, system 100 may include display units 130a and 130b, where each display unit consists of a computer (110a and 110b) and a touch display (121 and 122). Users of system 100 may interface with each display unit 130a and 130b by providing inputs, such as touch gestures, on the displays 121 and 122.
[0017] Multiple objects (24, 26, 27, and 28) may be contained in
the shared digital workspace on the server 110 where each object
has an X and a Y coordinate that describes its position in the
digital workspace.
[0018] As shown in FIG. 1, two display units 130a and 130b may be coordinated so that the displays (121 and 122) are physically adjacent. Scrolling information may be contained in the shared
digital workspace on the server 110 where the scrolling information
comprises X and Y coordinates. Each display unit (130a, 130b) may
show a portion of the shared digital workspace based on this
scrolling information. The leftmost display unit 130a may show, for
example, a rectangular subset of a shared document that starts at
the scrolling X and Y location, and extends rightward by the width
of the display. The next display unit (130b) may start at the same
Y location, but its X location may begin where the display to its
left (130a) finished, creating a contiguous view of the shared
digital workspace and document.
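Expressed as code, this layout rule is a simple offset computation. The following is a minimal Python sketch, not taken from the patent (the names Viewport and viewport_for_unit are illustrative), of how a display unit indexed left-to-right might derive its visible rectangle from the shared scrolling information:

```python
from dataclasses import dataclass

@dataclass
class Viewport:
    x: float       # left edge, in shared-workspace coordinates
    y: float       # top edge, in shared-workspace coordinates
    width: float
    height: float

def viewport_for_unit(scroll_x: float, scroll_y: float, unit_index: int,
                      display_width: float, display_height: float) -> Viewport:
    """Unit 0 starts at the shared scroll X; each unit to the right begins
    where its left neighbor ends, yielding one contiguous view."""
    return Viewport(x=scroll_x + unit_index * display_width,
                    y=scroll_y,
                    width=display_width,
                    height=display_height)

# Example: with scroll (100, 0) and 1920-px-wide displays, unit 0 shows
# workspace x in [100, 2020) and unit 1 shows x in [2020, 3940).
```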
[0019] Changes to the shared digital workspace (or document) may be
communicated as network messages to the server 110. Some messages
may bypass the server and be sent directly between display
units.
[0020] Each display unit (130a and 130b) in the system supports
touches from its attached display (121 and 122) using a stylus,
finger, etc. Such contacts generally comprise down, move, and up
(and sometimes leave and enter) events. Touches are local to one
display. When a touch moves outside the display boundary, an up or a
leave event is generated. When a touch enters from outside the
display boundary, a down or enter event is generated.
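A sketch of this event vocabulary, and of the boundary-proximity test used throughout the flowcharts below, might look as follows (the threshold value is an assumption; the patent speaks only of a "predetermined proximity"):

```python
from enum import Enum, auto

class TouchEvent(Enum):
    DOWN = auto()   # contact begins on this display
    MOVE = auto()   # contact moves within this display
    UP = auto()     # contact ends on this display
    LEAVE = auto()  # contact crosses out over the display boundary
    ENTER = auto()  # contact crosses in from outside the boundary

BOUNDARY_PROXIMITY_PX = 40  # assumed value for the "predetermined proximity"

def near_right_boundary(x: float, display_width: float) -> bool:
    """True when a touch is within the predetermined proximity of the
    boundary (e.g., 123) shared with the display unit to the right."""
    return display_width - x <= BOUNDARY_PROXIMITY_PX
```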
[0021] Many possible gestures may be input by a user(s) onto the
display units (130a, 130b) in the system. Gestures may include, but
are not limited to dragging, inking, and scrolling. These gestures
may cross from one display unit 130a, across the boundary 123, and
onto an adjacent display unit 130b.
[0022] The present digital workspace allows the users to drag
objects (e.g., 24, 26, 27, and 28) in a digital workspace. FIGS.
2a(i) and (ii) show the user 22 dragging an object 24 across the
intersection 123 of the two display units 130a and 130b where the
display units are physically adjacent and share digital workspace
information so that they appear contiguous. In FIGS. 2a(i) and
(ii), note that the other objects 26, 27, and 28 on the displays
remain in their original positions during and after the dragging
operation. FIGS. 2b(i) and (ii) illustrate the user 22 scrolling a
background of the digital workspace or a shared document, thus
objects 24, 29, and 30 move together with the background as the
user 22 scrolls the digital workspace (or document). In FIG. 2c,
the user 22 draws (inks) a line 34 across the display units 130a
and 130b.
[0023] FIG. 3 depicts a process, which may be embodied in
instructions of a software program on the system, for the dragging
of an object across the two physically adjacent display units in
the system, and is described hereinafter. The program of computer
instructions may be stored in one or more computer-readable
medium(s) stored in combination of one or more of server 110, first
computer 110a, and second computer 110b. When executed by one or
more processors using ROM and RAM in the one or more of server 110,
first computer 110a, and second computer 110b, those processors
cause the actions described in FIG. 3.
[0024] In step S200, a touch down TOUCH1, consisting of X and Y coordinates, is detected, e.g., on the display 121. In step S205, it is determined whether TOUCH1 is close to the adjacent display 122. If the touch is not close, the process goes to step S240. Otherwise, the touch may be the continuation of a drag that started on display 122. So at step S210, the program stores TOUCH1 and initializes a countdown timer TIMER, where the length of TIMER is preferably determined by the time it takes to send a message from computer 110b to computer 110a. In step S215, subsequent moves TOUCH2, TOUCH3 . . . TOUCHN are saved. At step S220, if no drag handover message (DHM) was received from the adjacent display unit 130b, the process goes to step S235. The DHM may be of the form (Object=ID, TouchPosition=X,Y, ObjectPosition=X,Y). At step S225, the program compares the TouchPosition in the DHM to TOUCH1. If the distance between these is too large, the touches are unrelated and the program jumps to S235.
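A sketch of the DHM and of the step-S225 relatedness test follows; the distance threshold is an assumption, since the patent says only "too large":

```python
import math
from dataclasses import dataclass

@dataclass
class DragHandoverMessage:
    """DHM: (Object=ID, TouchPosition=X,Y, ObjectPosition=X,Y)."""
    object_id: str
    touch_x: float
    touch_y: float
    object_x: float
    object_y: float

MAX_HANDOVER_DISTANCE_PX = 80  # assumed bound on "too large"

def is_drag_continuation(dhm: DragHandoverMessage,
                         touch1_x: float, touch1_y: float) -> bool:
    """Step S225: compare the TouchPosition in the DHM with TOUCH1; if the
    distance between them is too large, the touches are unrelated."""
    dist = math.hypot(dhm.touch_x - touch1_x, dhm.touch_y - touch1_y)
    return dist <= MAX_HANDOVER_DISTANCE_PX
```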
[0025] In step S230, the information in DHM is used to set the
object being dragged (OBJ1), the touch position TP, and the object
position OP. From here, the program jumps to S250. In step S235, it is determined whether TIMER has expired. If it has expired, then the
stored touches TOUCH1-TOUCHN were not a continuation of a drag on
adjacent display unit 130b. In this case, the program jumps to step
S240. Otherwise the program goes back to step S215. At step S240,
the program determines if TOUCH1 is on an object OBJ. If it is on
an object, then the program proceeds to step S245. Otherwise, the
program jumps to step S270. In step S245, the touch position TP
(consisting of the X and Y coordinates of TOUCH1), the object
position OP (consisting of the X and Y position of OBJ) and the
object being dragged OBJ are all stored. At step S250, the current
touch position is obtained. TP is updated using the X and Y of the
current position. OP is updated by moving it the distance between
the old and new TP, and OBJ is moved to the new OP causing the
object to move to the new touch position. The updated position of
OBJ is sent to the server 110 and (either directly or via the
server) to display unit 130b.
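The update rule in step S250 keeps the object under the finger by applying the touch delta to the object position. A small sketch under that reading (names are illustrative):

```python
from typing import Tuple

Point = Tuple[float, float]

def apply_drag_move(tp: Point, op: Point, new_touch: Point) -> Tuple[Point, Point]:
    """Step S250: update TP to the current touch and move OP by the same
    delta; the caller then moves OBJ to the new OP and reports the updated
    position to the server 110 and to display unit 130b."""
    dx = new_touch[0] - tp[0]
    dy = new_touch[1] - tp[1]
    new_op = (op[0] + dx, op[1] + dy)
    return new_touch, new_op
```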
[0026] In step S255, the program checks if the drag is completed by
looking for an up or leave event. If the drag is not finished the
program goes to step S250. If the drag is finished, then at step
S260, the program checks if the final location of the drag is close
to the adjacent display 122. If this is not the case, the program
jumps to S270. If the drag is close to the adjacent display 122,
then it is possible that the user will continue the drag onto the
adjacent display unit 130b. So in step S265, the program sends a
drag handover message DHM (which contains the last known touch
position TP and object position OP and the ID of the object being
dragged OBJ). The message is sent to the adjacent display unit
130b, either directly or via the server. At step S270, the drag
sequence is finished. The program restarts each time a new touch
down event is encountered.
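The countdown at steps S210-S235 is the heart of the handover: the receiving display unit buffers touches for roughly one message transit time before concluding that the drag is purely local. A minimal sketch, assuming hypothetical non-blocking callables poll_dhm (returns a received DHM or None) and touch_source (returns the next buffered touch or None):

```python
import time

def wait_for_drag_handover(poll_dhm, save_touch, touch_source, timer_s=0.05):
    """Steps S210-S235 sketch: buffer TOUCH2..TOUCHN while a countdown
    TIMER runs; timer_s approximates the message time from computer 110b
    to computer 110a (the default here is an assumed value)."""
    deadline = time.monotonic() + timer_s
    while time.monotonic() < deadline:
        dhm = poll_dhm()            # S220: non-blocking check for a DHM
        if dhm is not None:
            return dhm              # S230: continue the drag with DHM contents
        touch = touch_source()      # S215: keep saving subsequent moves
        if touch is not None:
            save_touch(touch)
        time.sleep(0.005)
    return None                     # S235: TIMER expired -- treat as a new local drag
```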
[0027] In the present invention, users are allowed to ink in a shared document, where an ink stroke may be considered a series of connected segments created by at least two touch points. FIG. 2c illustrates a user 22 drawing an ink stroke 34
across the boundary 123 between the adjacent displays 121 and 122.
FIG. 4 depicts a process, which may be embodied in instructions of
software on the system, for inking on a shared document across two
physically adjacent display units. It is described hereinafter from
the perspective of computer 110a on display unit 130a.
[0028] In step S300, a touch down TOUCH1 is detected on display
121. In step S305, it is determined whether TOUCH1 is close to the
adjacent display 122. If the touch is not close, the process goes to step S350 via step S308, where an empty ink stroke is initiated.
Otherwise, the touch may be the continuation of ink that started on
display 122. So in step S310, the program stores TOUCH1 and
initializes a countdown timer TIMER, where the length of TIMER is
preferably determined by the time it takes to send a message from
computer 110b to computer 110a. In S315, subsequent moves TOUCH2,
TOUCH3 . . . TOUCHN are saved. At step S320, if no handover ink message (HIM) was received from the adjacent display unit 130b, the process goes to step S335. The HIM may be of the form (InkID=1234, Color=color, Width=width, InkPoints={Ink1=x1,y1, Ink2=x2,y2, . . . InkN=xn,yn}). At step S325,
the program compares the position of the final ink point InkN in
HIM with TOUCH1. If the distance between these is too large, the
ink strokes are unrelated and the program jumps to S335.
[0029] At step S330, the electronic ink stroke INK1 is retrieved
from the handover ink message HIM. The stored touch points
TOUCH1-TOUCHN are added to the tail of the ink (Ink1-InkN) and the
server 110 is updated with the new ink. The program then jumps to
step S350. In step S335, the program determines if TIMER has
expired. If it has expired, then the stored touches TOUCH1-TOUCHN
were not a continuation of ink from the adjacent display unit 130b. In this case, the program jumps to S340. Otherwise, the program goes
back to step S315. At step S340, the program creates a new ink
stroke INK1 out of the TOUCH1-TOUCHN coordinates. In step S350, the
current touch location is added to INK1, and the server 110 is
updated accordingly.
[0030] At step S355, the program checks if the inking is finished
by looking for an up or leave event. If the inking is not finished,
the program goes to S350. If the inking is finished, then at step
S360, the program checks if the ink ended close to the adjacent
display 122. If this is not the case, then the program jumps to
S370. If the ink ended close to the adjacent display, it is possible that the user will continue the ink onto the adjacent display unit 130b. So, at step S365, the program sends a handover ink
message HIM, which contains an ink stroke (Ink1=TOUCH1,
Ink2=TOUCH2, . . . InkN=TOUCHN) and data about the color and size
of the ink stroke. At step S370, the inking sequence is finished.
The program restarts each time a new touch down event is
encountered.
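A sketch of the HIM and of the step-S330 stroke continuation (the types and names are illustrative, not from the patent):

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class HandoverInkMessage:
    """HIM: (InkID, Color, Width, InkPoints={Ink1 .. InkN})."""
    ink_id: int
    color: str
    width: float
    points: List[Point]

def continue_stroke(him: HandoverInkMessage,
                    buffered_touches: List[Point]) -> List[Point]:
    """Step S330: append the buffered local touches TOUCH1..TOUCHN to the
    tail of the handed-over stroke Ink1..InkN so the stroke continues
    seamlessly across the boundary; the merged stroke is then sent to
    the server 110."""
    return him.points + buffered_touches
```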
[0031] In the present invention, a shared workspace or document can be scrolled in a coordinated manner on the display units so that it presents a contiguous view to the user. FIGS. 2b(i) and 2b(ii)
illustrate a user 22 scrolling across the boundary 123 between the
adjacent displays (121 and 122). FIG. 5 depicts a process, which
may be embodied in instructions of software on the system, for
scrolling on a shared workspace or document across two physically
adjacent display units. It is described hereinafter from the
perspective of computer 110a on display unit 130a.
[0032] In step S400, a touch down TOUCH1 is detected on display
121. In step S405, it is determined whether TOUCH1 is close to the
adjacent display 122. If the touch is not close, the process goes to step S440.
Otherwise, the touch may be the continuation of scrolling that
started on display 122. So at step S410, the program stores TOUCH1
(as a raw screen coordinate) and initializes a countdown timer
TIMER, where the length of the timer is preferably determined by
the time it takes to send a message from computer 110b to computer
110a. Raw screen coordinates are used because scrolling changes the
coordinate system relative to the shared workspace. At step S415,
subsequent moves TOUCH2, TOUCH3 . . . TOUCHN are saved. At step S420, if no handover scroll message (HSM) is received from the adjacent display unit 130b, the process goes to step S435. The HSM may be of the form (ScrollFingerLocation=x,y, RawScreenLocation=x,y). At step S425, the
program compares the position of RawScreenLocation with TOUCH1 and
determines if TOUCH1 is a continuation of the same scroll. If it is
not, then the scroll events are unrelated and the program jumps to
S435.
[0033] At step S430, the ScrollFingerLocation (SFL) is retrieved
from the handover scroll message HSM. The scroll position of the
shared workspace or document on the display unit is adjusted so
that the current touch is over the same point on the shared
workspace or document as it was when the HSM was generated. The
program then jumps to step S450. At step S435, the program determines if the countdown timer TIMER has expired. If it has, then the program knows that the stored touches TOUCH1-TOUCHN were not a continuation of a scroll from the adjacent display 122. In this case, the program jumps to step S440. Otherwise, the program goes back to step S415. In step S440, the program determines if TOUCH1 is the start of a scrolling gesture. In this embodiment, a scrolling gesture is a touch on the background of the shared workspace or document. If it is not, the program jumps to step S470. If it is a scroll start, then at step S445, the scroll finger
location SFL is calculated. The scroll finger location is the
location of the touch in shared workspace/document coordinates. At
step S450, the current touch location is compared to the SFL, and
the shared workspace or document is scrolled so that the current
touch location and SFL match. The current position of the scrolling
is updated on the server 110.
[0034] At step S455, the program checks to see if the scroll is
finished by looking for an up or leave event. If the scroll is not
finished, the program goes back to step S450. If the scroll is
finished then at step S460, the program checks to see if the scroll
finished close to neighboring display 122. In this case, it is
possible that the user may be continuing the scroll on the
neighboring display 122, and the program jumps to step S465.
Otherwise, the program goes to step S470. In step S465, a handover
scroll message (HSM) is sent to the neighboring display unit 130b.
This message contains SFL as well as the location on the screen
where the scroll finished. At step S470, the program
terminates.
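The adjustment at steps S430 and S450 amounts to solving for the scroll offset that puts the workspace point SFL back under the finger. A minimal sketch under one plausible coordinate convention (the unit_origin_x parameter, this unit's X offset into the contiguous view, is an assumption about how coordinates compose):

```python
from typing import Tuple

def scroll_to_match(sfl: Tuple[float, float],
                    raw_touch: Tuple[float, float],
                    unit_origin_x: float) -> Tuple[float, float]:
    """Steps S430/S450 sketch: choose the scroll offset so that the touch
    at raw screen position `raw_touch` lies over the workspace point `sfl`
    (the ScrollFingerLocation carried in the HSM).

    The workspace point under a raw screen X is
        workspace_x = scroll_x + unit_origin_x + raw_x,
    so we solve for scroll_x (and likewise for Y)."""
    scroll_x = sfl[0] - unit_origin_x - raw_touch[0]
    scroll_y = sfl[1] - raw_touch[1]
    return scroll_x, scroll_y
```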
[0035] According to an embodiment of the invention, system 100 may
incorporate peer-to-peer servers in cooperation with multiple
displays for implementing a shared digital workspace with a
contiguous background. And display devices can be touch screen
monitors of any size, wall projection displays, display tables, and
mobile devices, such as phones and portable computers (e.g., iPads)
that are used to present a contiguous digital background workspace.
In addition, the touch gestures and corresponding events of the
present invention may be implemented for proximity or
movement-based gestures, such as "hover" gestures, VR/AR (virtual
reality/augmented reality) user interfaces, and the like.
[0036] The present invention is disclosed herein in terms of preferred embodiments thereof, which provide a method and apparatus for using gestures across multiple display devices, as defined in the appended claims. Various changes, modifications, and
alterations in the teachings of the present invention may be
contemplated by those skilled in the art without departing from the
intended spirit and scope of the appended claims. It is intended
that the present invention encompass such changes and
modifications.
* * * * *