U.S. patent application number 14/739691 was filed with the patent office on 2015-06-15 and published on 2016-05-26 as publication number 20160150293 for a system and method for listening to teams in a race event.
The applicant listed for this patent is TAGI Ventures, LLC. The invention is credited to Steven M. Koehler and Eric K. Moe.
Application Number: 14/739691
Publication Number: 20160150293
Family ID: 38041615
Filed: June 15, 2015
Published: May 26, 2016

United States Patent Application 20160150293
Kind Code: A2
Koehler; Steven M.; et al.
May 26, 2016
SYSTEM AND METHOD FOR LISTENING TO TEAMS IN A RACE EVENT
Abstract
A computer-implemented method and system allows a remote
computer user to listen to teams in a race event. The method
includes receiving audio signals from a plurality of audio sources
at the race event; transmitting at least some of the audio signals
to a remote computer; and filtering the audio signals as a function
of the source of at least some of the audio signals so that at
least some of the audio signals are not played by the remote
computer and heard by the user.
Inventors: Koehler, Steven M. (Orono, MN); Moe, Eric K. (West Malling, UK)

Applicant:
Name: TAGI Ventures, LLC
City: Orono
State: MN
Country: US

Prior Publication:
Document Identifier: US 20150304739 A1
Publication Date: October 22, 2015

Family ID: 38041615
Appl. No.: 14/739691
Filed: June 15, 2015
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
13/400,245 (parent of 14/739,691) | Feb 20, 2012 | 9,059,809
11/620,967 (parent of 13/400,245) | Jan 8, 2007 | 8,127,037
10/060,800 (parent of 11/620,967) | Jan 30, 2002 | 7,162,532
09/128,896 (parent of 10/060,800) | Aug 4, 1998 |
60/075,659 (provisional, priority claimed by 09/128,896) | Feb 23, 1998 |
Current U.S. Class: 725/32

Current CPC Class: H04N 21/4852 (2013.01); H04N 21/233 (2013.01); H04N 21/8106 (2013.01); H04N 21/4854 (2013.01); H04H 20/04 (2013.01); H04L 65/608 (2013.01); H04M 3/566 (2013.01); H04L 65/4038 (2013.01); H04L 12/1822 (2013.01); H04N 21/2187 (2013.01); H04N 21/21805 (2013.01); H04N 21/439 (2013.01); H04M 3/56 (2013.01); H04L 67/1095 (2013.01)

International Class: H04N 21/81 (2006.01); H04N 21/485 (2006.01); H04N 21/233 (2006.01); H04N 21/439 (2006.01); H04L 29/06 (2006.01); H04L 29/08 (2006.01)
Claims
1. A computer implemented method to provide audio and video from a
televised vehicle race event using a computer network to a
plurality of remote computing devices remote from the vehicle race
event, the computer implemented method comprising: providing a
server having a processor operable to enable the server to
communicate via the computer network to each of the remote
computing devices and to access a storage device having audio data
from the televised vehicle race event; providing a user interface
for a single monitor operable on each of the plurality of remote
computing devices, the user interface configured to render a visual
indication identifying audio sources of a first plurality of audio
sources apart from other audio sources that have audio that can be
rendered at each of the plurality of remote computing devices,
wherein a number of audio sources in the first plurality of audio
sources is less than a total number of audio sources at the vehicle
race event having audio that can be rendered at each of the
plurality of remote computing devices; during the televised vehicle
race event and separate from a television signal of the televised
vehicle race event: transmitting with the server to each of the
remote computing devices through the computer network, first data
packets in one or more streams containing first data representing
audio signals from audio sources and video signals from cameras at
the vehicle race event, the first data packets constructed so that
each remote computing device selectively renders a plurality of
camera views occurring at the same time in the televised vehicle
race simultaneously with audio from only one of the audio sources
of the first plurality of audio sources when audio from at least
two of the audio sources of the first plurality of audio sources
occurs at the same time; after transmitting from the server to each
remote computing device the first data packets constructed so that
each remote computing device renders the plurality of camera views
occurring at the same time in the vehicle race event simultaneously
with audio from only one of the audio sources of the first
plurality of audio sources when audio from at least two of the
audio sources of the first plurality of audio sources occurs at the
same time, receiving with the server an indication from at least
one remote computing device to transmit audio of an audio source
that occurred earlier in the vehicle race event; and after
receiving the indication from the at least one remote computing
device, transmitting with the server to the at least one remote
computing device, second data packets containing second data
representing the audio corresponding to the indication from the at
least one remote computing device.
2. (canceled)
3. (canceled)
4. (canceled)
5. (canceled)
6. (canceled)
7. (canceled)
8. (canceled)
9. (canceled)
10. (canceled)
11. (canceled)
12. (canceled)
13. (canceled)
14. (canceled)
15. (canceled)
16. (canceled)
17. (canceled)
18. (canceled)
19. (canceled)
20. (canceled)
21. (canceled)
22. (canceled)
23. (canceled)
24. (canceled)
25. (canceled)
26. A computing system to provide audio and video associated with
teams during a televised vehicle race event via a computer network
to a plurality of remote computing devices remote from the vehicle
race event, the computing system comprising: a storage device
configured to store data of audio from each of a plurality of audio
sources from the vehicle race event; and a server having a
processing module operably connected to the storage device to
receive the data and to the computer network, the processing module
configured to during the televised vehicle race event and separate
from a television signal of the televised vehicle race event:
transmit to each remote computing device through the computer
network first data packets in one or more streams containing first
data representing the audio signals from the audio sources and the
video signals from the cameras at the vehicle race event, the first
data packets constructed so that each remote computing device
selectively renders a plurality of camera views occurring at the
same time in the vehicle race event simultaneously with audio from
one of the audio sources of a first plurality of audio sources
having audio that can be rendered at each of the plurality of
remote computing devices when audio from at least two of the audio
sources of the first plurality of audio sources occurs at the same
time, wherein a number of audio sources in the first plurality of
audio sources is less than a total number of audio sources at the
vehicle race event having audio that can be rendered at each of the
plurality of remote computing devices; after transmitting to each
of the remote computing devices the first data packets constructed
so that each remote computing device selectively renders the
plurality of camera views occurring at the same time in the race
simultaneously with audio from one of the audio sources of the
first plurality of audio sources when audio from at least two of
the audio sources of the first plurality of audio sources occurs at
the same time, receive an indication from at least one remote
computing device to transmit audio of an audio source that occurred
earlier in the vehicle race event; and after receiving the
indication from the at least one remote computing device, access
the storage device and transmit to the at least one remote
computing device, second data packets containing second data
representing the audio corresponding to the indication from the at
least one remote computing device.
27. (canceled)
28. (canceled)
29. (canceled)
30. (canceled)
31. The computing system of claim 26 wherein the processing module
is further configured to transmit to each remote computing device
through the computer network, the first data packets, or third data
packets or combinations thereof containing third data representing
audio signals from audio sources and video signals from cameras at
the vehicle race event constructed so that each remote computing
device selectively renders a different plurality of camera views
occurring at the same time in the vehicle race simultaneously with
audio from one of the audio sources of a second plurality of audio
sources when audio from at least two of the audio sources of the
second plurality of audio sources occurs at the same time, wherein
a number of audio sources in the second plurality of audio sources
having audio that can be rendered at each of the plurality of
remote computing devices is less than the total number of audio
sources at the vehicle race event having audio that can be rendered
at each of the plurality of remote computing devices.
32. The computing system of claim 26 wherein the processing module
is further configured to receive a second indication from each of
the remote computing devices to transmit audio of one of the audio
sources that occurred earlier in the vehicle race event, and
configured to transmit to each of the remote computing devices,
second data packets for each respective remote computing device
containing data representing the audio corresponding to each second
indication from each respective remote computing device.
33. The computer implemented method of claim 1 wherein the user
interface is configured to render a second visual indication apart
from the first visual indication, the second visual indication
indicating audio sources of a second plurality of audio sources
having audio that can be rendered at each of the plurality of
remote computing devices, wherein a number of audio sources in the
second plurality of audio sources is less than the total number of
audio sources at the vehicle race event having audio that can be
rendered at each of the plurality of remote computing devices, and
wherein the computer implemented method further comprises
transmitting, from the server to each remote computing device
through the computer network, the first data packets or third data
packets, or combinations thereof containing third data representing
audio signals from audio sources and video signals from cameras at
the vehicle race event constructed so that each remote computing
device selectively renders a different plurality of camera views
occurring at the same time in the vehicle race simultaneously with
audio from only one of the audio sources of the second plurality of
audio sources when audio from at least two of the audio sources of
the second plurality of audio sources occurs at the same time.
34. The computer implemented method of claim 33 wherein receiving
at the server the indication from the at least one remote computing
device comprises receiving at the server an indication from each of
the remote computing devices to transmit audio of one of the audio
sources that occurred earlier in the vehicle race event, and
wherein transmitting from the server to the at least one remote
computing device comprises transmitting from the server to each of
the remote computing devices, second data packets for each
respective remote computing device containing data representing the
audio corresponding to each indication from each respective remote
computing device.
35. The computer implemented method of claim 34 wherein at least
two of the audio sources of the first plurality of audio sources
are associated with two different teams in the vehicle race
event.
36. The computer implemented method of claim 35 wherein at least
two of the audio sources of the second plurality of audio sources
are associated with two different teams in the vehicle race event,
the at least two audio sources of the second plurality of audio
sources being different than the at least two audio sources of the
first plurality of audio sources.
37. A computing device for rendering audio and video of a vehicle
race event, the computing device comprising: a speaker; a single
monitor; a user interface; and a processor operably coupled to the
speaker, the single monitor and the user interface, the processor
configured to: receive from a network data packets representative
of audio from audio sources and video from cameras at the vehicle
race event; visually identify on the single monitor audio sources
of a first plurality of audio sources at the vehicle race event
having audio that can be rendered through the speaker, a number of
audio sources in the first plurality of audio sources being less
than a total number of audio sources at the vehicle race event
having audio that can be rendered through the speaker; at least
during portions of the vehicle race event as the vehicle race event
is occurring live: receive through the user interface an indication
of a selected audio source that the user of the computing device
wants to hear over any other audio source at the vehicle race
event; audibly render through the speaker the selected audio source
that the user of the computing device wants to hear over any other
audio source at the vehicle race event; receive through the user
interface inputs from a user of the computing device to selectively
render on the single monitor only one of any of the camera views
and to selectively render on the single monitor a plurality of
camera views of the vehicle race event occurring at the same time
during the vehicle race event; simultaneously visually render the
plurality of camera views of the vehicle race event on the single
monitor occurring at the same time during the vehicle race event
while also audibly rendering through the speaker only one audio
source of the first plurality of audio sources when audio occurs
simultaneously during the vehicle race event for at least two audio
sources of the first plurality of audio sources; after audibly
rendering the only one audio source of the first plurality of audio
sources or the audio of the selected audio source that the user of
the computing device wants to hear over any other audio source at
the vehicle race event, receive through the user interface a second
indication of an audio source having audio that occurred earlier in
the vehicle race event; and audibly render through the speaker
audio corresponding to the second indication that occurred earlier
in the vehicle race event.
38. The computing device of claim 37 wherein the only one audio
source of the first plurality of audio sources is audio associated
with a team, and wherein the plurality of camera views are views
from cameras in separate vehicles of teams other than said
team.
39. The computing device of claim 37 wherein the data packets
representative of audio from the audio sources and video from the
cameras at the vehicle race event are separate from a television
signal of the vehicle race event.
40. The computing device of claim 37 wherein the processor is
further configured to visually identify on the single monitor other
audio sources having audio that can be rendered through the speaker
that are different than the audio sources of the first plurality of
audio sources.
41. The computing device of claim 40 wherein the other audio
sources that are different than the audio sources of the first
plurality of audio sources comprise audio sources of a second
plurality of audio sources having audio that can be rendered
through the speaker, a number of audio sources in the second
plurality of audio sources being less than the total number of
audio sources at the vehicle race event having audio that can be
rendered at the computing device.
42. The computing device of claim 41 wherein the processor is
further configured to visually identify on the single monitor the
audio sources of the first plurality of audio sources apart from
the audio sources of the second plurality of audio sources.
43. The computing device of claim 42 wherein the processor is
further configured to simultaneously visually identify on the
single monitor with a first visual indication the audio sources of
the first plurality of audio sources and with a second visual
indication the audio sources of the second plurality of audio
sources.
44. The computing device of claim 37 wherein the plurality of
camera views of the vehicle race event occurring at the same time
comprises a view of the vehicle race event as seen by a camera in a
vehicle and a view of the vehicle race event as seen by a camera
not in a vehicle.
45. The computing device of claim 37 and wherein the processor is
further configured to visually render on the single monitor an
indication of time when the audio corresponding to the second
indication occurred during the vehicle race event.
46. The computing device of claim 37 and wherein the processor is
further configured to visually render performance of a team
relative to a leader of the vehicle race event.
47. The computing device of claim 37 and wherein the processor is
further configured to receive through the user interface a third
indication to enlarge one of the camera views of the vehicle race
event as seen by a vehicle in the vehicle race event to be the only
camera view on the single monitor and then visually render said one
of the camera views as the only camera view on the single
monitor.
48. The computing device of claim 47 and wherein the processor is
further configured to: render a first visual indication on the
single monitor indicating audio sources of the first plurality of
audio sources and a second visual indication apart from the first
visual indication, the second visual indication indicating audio
sources of a second plurality of audio sources, different than the
first plurality of audio sources, having audio that can be
rendered; receive through the user interface a fourth indication to
render a different plurality of camera views occurring at the same
time in the vehicle race; and render the different plurality of
camera views occurring at the same time in the vehicle race
simultaneously with audio from only one of the audio sources of the
second plurality of audio sources when audio from at least two of
the audio sources of the second plurality of audio sources occurs
at the same time.
49. A computing device for rendering audio and video of a vehicle
race event, the computing device comprising: a speaker; a single
monitor; a user interface; and a processor operably coupled to the
speaker, the single monitor and the user interface, the processor
configured to: receive through a computer network data packets
containing data representing audio from a plurality of audio
sources and video from a plurality of camera views at the vehicle
race event; and at least during portions of the vehicle race event
as the vehicle race event is occurring live: receive through the
user interface inputs from a user of the computing device to
selectively render on the single monitor only one of any of the
plurality of camera views and to selectively render on the single
monitor a plurality of camera views of the vehicle race event
occurring at the same time during the vehicle race event; and when
visually rendering the plurality of camera views of the vehicle
race event occurring at the same time during the vehicle race event
also audibly render through the speaker only one audio source from
the plurality of audio sources.
50. The computing device of claim 49 wherein the plurality of
camera views of the vehicle race event occurring at the same time
during the vehicle race event comprise at least two of camera views
that do not correspond to a view of the vehicle race event as seen
by a vehicle corresponding to the only one audio source.
51. The computing device of claim 49 wherein the processor is
further configured to receive through the user interface an
indication of a selected audio source that the user of the
computing device wants to hear over any other audio source at the
vehicle race event, the selected audio source being the only one
audio source being rendered when visually rendering the plurality
of camera views of the vehicle race event occurring at the same
time during the vehicle race event.
52. The computing device of claim 51 wherein the selected audio
source comprises commentary of the vehicle race event.
53. The computing device of claim 51 wherein the selected audio
source comprises audio of track officials of the vehicle race
event.
54. The computing device of claim 51 wherein the selected audio
source comprises audio communications of a team during the vehicle
race event.
55. The computing device of claim 51 wherein the processor is
further configured to: receive through the user interface a second
indication of a different audio source that is to be the selected
audio source that a user of the computing device wants to hear over
any other audio source at the vehicle race event; and receive
through the user interface a third indication of a different camera
view to render, the third indication being separate from the
second indication.
56. The computing device of claim 49 wherein the processor is
further configured to receive through the user interface an
indication to enlarge on the single monitor one of the camera views
of the vehicle race event as seen by a vehicle in the vehicle race
event so as to be the only camera view on the single monitor and
then visually render said one of the camera views as the only
camera view on the single monitor.
57. The computing device of claim 49 wherein the processor is
further configured to render a visual indication on the single
monitor identifying teams of a first plurality of teams apart from
other teams at the vehicle race event, wherein a number of teams in
the first plurality of teams is less than a total number of teams
at the vehicle race event, and wherein the data packets containing
data representing video from the plurality of camera views at the
vehicle race event comprise data representing camera views as seen
from vehicles of teams of the first plurality of teams.
58. The computing device of claim 57 wherein the processor is
further configured to render a second visual indication identifying
teams of a second plurality of teams apart from other teams at the
vehicle race event, wherein a number of teams in the second
plurality of teams is less than a total number of teams at the
vehicle race event, and wherein the data packets containing data
representing video from the plurality of camera views at the
vehicle race event comprise data representing camera views as seen
from vehicles of teams of the second plurality of teams, and
wherein the first plurality of teams is different than the teams of
the second plurality of teams.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation of application Ser. No.
13/400,245, filed Feb. 20, 2012, now U.S. Pat. No. 9,059,809, which
is a continuation of application Ser. No. 11/620,967, filed Jan. 8,
2007, now U.S. Pat. No. 8,127,037, which is a continuation of
application Ser. No. 10/060,800, filed Jan. 30, 2002, now U.S. Pat.
No. 7,162,532, which is a continuation-in-part application and
claims the priority of Ser. No. 09/128,896, filed Aug. 4, 1998,
which claims the benefit of provisional patent application Ser. No.
60/075,659, filed Feb. 23, 1998, all of which are incorporated
herein by reference.
BACKGROUND OF THE INVENTION
[0002] The present invention relates to race events. More
particularly, the present invention allows a person to listen to
communications of race teams participating in a race event where
the person is remote from the race event.
[0003] Race events, such as motor vehicle racing, are a steadily
growing sport. In many forms of racing, a driver communicates with
a crew during the race to discuss strategy and vehicle performance.
The communications are commonly listened to by fans at the race
event, allowing the fans to become more involved during the race.
However, the transmitters used are not powerful and are generally
limited in range so as to function within the area of the race
track. Thus, only those fans at the race have the ability to listen
to the race communications. For fans watching the race remotely,
such as on television, the communications are not generally
available except for occasional excerpts provided by the race
broadcaster.
SUMMARY OF THE INVENTION
[0004] A computer-implemented method and system allows a remote
computer user to listen to teams in a race event. The method
includes receiving audio signals from a plurality of audio sources
at the race event; transmitting at least some of the audio signals
to a remote computer; and filtering the audio signals as a function
of the source of at least some of the audio signals so that at
least some of the audio signals are not played by the remote
computer and heard by the user.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] FIG. 1 is a pictorial representation of a race event and a
system of the present invention for monitoring race communications
and providing the communications to a remote fan.
[0006] FIG. 2 is a user interface for selecting which race
communications to listen to.
[0007] FIG. 3 is a schematic diagram illustrating a channel of
communication for one team.
[0008] FIG. 4 is a monitor illustrating a form of user interfaces
for remotely viewing and listening to a race.
[0009] FIG. 5 is a block diagram of a server.
[0010] FIG. 6 is a representative view of a data packet.
[0011] FIG. 7 is a block diagram of a remote computer.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0012] A race event is illustrated in FIG. 1 at 10. In the
embodiment illustrated, the race event 10 is a motor vehicle race
involving a plurality of cars at 12, 13, 14, 15, 16, 17 and 18. The
cars race on an oval track 26. The track 26 includes a pit area 28
used for periodic refueling and maintenance of the cars 12-18
during the race.
[0013] During the race, the driver of each of the cars 12-18 is in
communication with team members located in the pit area 28. The
drivers of the cars 12-18 discuss race strategy, including when to
perform refueling and maintenance on the car during the race.
Generally, each team is assigned a particular channel or operating
frequency to the exclusion of all other teams so that the driver
and the team can communicate easily. In the embodiment illustrated,
the driver of car 12 communicates with a team member located in the
pit area 28, designated at 12A, while the driver of car 13
communicates with a team member 13A, also in the pit area 28.
[0014] In many race events, additional team members may be located
at other areas on the track 26 during the race. For instance, a
"spotter" 12B is also sometimes present during the race. The
spotter 12B watches the race and communicates with the driver of
car 12 and the team member 12A located in the pit area 28,
providing relevant information concerning the race. For example,
the spotter 12B informs the driver of car 12 when he has cleared
another car during the race and can safely pass in front of the
other car. Likewise, a spotter 13B communicates similarly with the
driver of car 13 and the team member 13A in the pit area 28. As
stated above, each of the teams for the cars 12-18 has team members
in the pit area 28 and spotters communicating on separate assigned
channels.
[0015] FIG. 1 further illustrates a system 30 of the present
invention that allows a remote fan to selectively listen to
communications made by the team members of each team during the
race. The system 30 includes an antenna 32 for picking up all
communications made between the team members of each team during
the race. A radio receiver 34 is connected to the antenna 32 and
provides the communications to a computer 36 as audio signals 38.
The computer 36 is located at the race track 26 or remote
therefrom. However, whether located at the track 26 or remote
therefrom, the computer 36 is connected to a wide area network 40,
such as the Internet. A remote race fan uses a remote computer 42
connectable to the wide area network 40, and accesses the computer
36 (hereinafter "server") in order to receive information from the
server 36 indicative of the audio signals 38 and, thus, the
communications of the race event 10.
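The flow just described (audio from the receiver 34 packetized by the server 36 and delivered over the wide area network 40 to the remote computer 42) can be sketched as below. The header fields and byte layout are illustrative assumptions only; the patent describes a representative data packet (FIG. 6) but does not specify a wire format.

```python
import json
import struct

def make_packet(source_id, seq, payload: bytes) -> bytes:
    """Serialize one audio packet: a length-prefixed JSON header
    identifying the source channel, followed by the raw audio payload.
    (Hypothetical layout, not the patent's actual packet format.)"""
    header = json.dumps({"src": source_id, "seq": seq}).encode()
    return struct.pack("!H", len(header)) + header + payload

def parse_packet(data: bytes):
    """Split a packet back into its header dict and audio payload."""
    (hlen,) = struct.unpack("!H", data[:2])
    header = json.loads(data[2:2 + hlen])
    return header, data[2 + hlen:]

# Round-trip one packet from a hypothetical source "car-12".
pkt = make_packet("car-12", 7, b"\x00\x01")
hdr, audio = parse_packet(pkt)
```

Tagging each packet with its source channel is what later lets either the server 36 or the remote computer 42 filter audio by team, as described below.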
[0016] The server 36 and the remote computer 42 can be a personal
computer, laptop computer or other suitable computing device
connectable to the wide area network 40 using phone lines, cable
lines, satellite links, or other suitable communication means.
Generally, the remote computer 42 includes a display or monitor, an
input device, such as a keyboard or a mouse, and a speaker (not
shown, but well known). The remote computer 42 further includes a
suitable microprocessor and support peripherals such as random
access memory (RAM), read only memory (ROM) and storage mediums
such as a hard disk, floppy disk/drive and/or optical disk/drive
communicating with each other over a system bus, again all well
known in the art. Exemplary embodiments of the present invention
described below include modules that can be implemented in
hardware, software or a combination of both hardware and
software.
[0017] In a preferred embodiment of the present invention, the
remote fan using the remote computer 42 can select any or all of
the team communications to listen to during the race. FIG. 2 is an
embodiment of a user interface 50 displayable on the monitor of the
remote computer 42. The user interface 50 comprises a list 52 of
each of the participants in the race. The list 52 includes unique
identifiers 54, such as the car number or the team name, for each
of the race participants. Using a suitable input device, such as a
mouse, the remote fan selects any or all of the participants to
listen to during the race using the unique identifiers 54.
[0018] In a first embodiment, selection of a race participant to
listen to can be identified by a "check" indicated at 56 for the
corresponding identifier 54. During the race, the communications
pertaining to only those selected teams in the list 52 would be
provided to the speaker of the remote computer 42 for listening by
the remote fan. Communications of teams not selected in the list 52
would be filtered by either the server 36 or the remote computer 42
(if all of the communications are passed through the wide area
network 40), and not provided to the speaker of the remote computer
42. In this manner, the remote fan can choose which participants to
listen to during the race, while ignoring or filtering out all
other communications. In a further embodiment, when there exists
any communication between the driver, the pit area or the spotter,
if present, of a selected participant in the list 52, the unique
identifier 54 corresponding to the communication is uniquely
identified in the list 52 so as to signify to the remote fan which
team is currently being heard through the speaker of the remote
computer 42. For instance, the unique identifier 54 can flash (as
illustrated for team number 64), be presented in bold face or
highlighted when communications are being heard over the speaker of
the remote computer 42. As stated above, typically each team is
assigned a unique channel or frequency, thus, identification of
each communication is relatively easy.
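The selection-and-filtering behavior of this embodiment can be sketched as a simple predicate over incoming audio: only packets whose source team appears in the fan's checked list 52 reach the speaker. The team identifiers and packet structure here are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class AudioPacket:
    team: str       # unique identifier 54, e.g. a car number
    samples: bytes  # raw audio payload

def filter_packets(packets, selected_teams):
    """Pass only packets from teams the remote fan has selected;
    all other communications are filtered out, as in paragraph [0018]."""
    return [p for p in packets if p.team in selected_teams]

# Hypothetical incoming stream; the fan has checked teams "64" and "18".
stream = [
    AudioPacket("64", b"\x01\x02"),
    AudioPacket("12", b"\x03\x04"),
    AudioPacket("18", b"\x05\x06"),
]
heard = filter_packets(stream, selected_teams={"64", "18"})
```

As the text notes, this filtering may run on the server 36 or, if all communications are passed through the wide area network 40, on the remote computer 42; the predicate is the same either way.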
[0019] In a further embodiment, the remote fan can prioritize the
selected teams in the list 52. In the embodiment illustrated, a
priority number is indicated at 58 for each of the teams selected.
The highest priority team is herein indicated as "1", while the
lowest priority team is indicated as "3". By prioritizing, the
remote fan can be assured of hearing substantially all
communications from a particular team, while still being able to
hear most of the communications from other teams. For instance, if
communications are currently being heard from a priority "2" team
through the speaker of the remote computer 42 and communication
starts for a priority "1" team, the audio being played over the
speaker of the remote computer 42 will immediately switch to the
priority "1" team. Likewise, if a priority "3" team communication
is currently being heard and a priority "2" team communication
begins, the speaker of the remote computer 42 will then start
providing the priority "2" team communication. However, if during
the playing of a communication, a lower priority communication
begins, the lower priority communication will not be delivered by
the speaker of the remote computer 42 until the higher priority
communication suspends, which is typically
signified by silence for a given time period. In addition, if
during the communication of a priority team, another communication
of a team having the same priority begins, the communication of the
first playing team will not suspend until the communication is
over. At that time, the communication of the other team having the
same priority will then begin.
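The priority-based switching described above can be sketched as follows. This is an illustrative sketch only; the function name, team identifiers and priority values are hypothetical and do not appear in the figures.

```python
# Sketch of the priority-based channel switching of this embodiment.
# Team identifiers and priority values are hypothetical examples.

def select_channel(current, incoming, priorities):
    """Return the channel that should play when `incoming` begins
    while `current` is playing (None means nothing is playing).

    A lower priority number means higher priority; an incoming
    communication preempts only when its priority is strictly
    higher, otherwise the current communication keeps playing.
    """
    if current is None:
        return incoming
    if priorities[incoming] < priorities[current]:
        return incoming  # higher priority preempts immediately
    return current       # equal or lower priority waits

priorities = {"team_64": 1, "team_24": 2, "team_88": 3}

# A priority "1" team preempts a priority "2" team...
assert select_channel("team_24", "team_64", priorities) == "team_64"
# ...but a priority "3" team does not interrupt a priority "2" team.
assert select_channel("team_24", "team_88", priorities) == "team_24"
```

Note that the strict inequality encodes the rule that a communication of equal priority waits until the first playing team's communication is over.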
[0020] In a further embodiment, the list 52 can include other audio
sources such as TV commentary provided by a broadcaster televising
the race event 10. The list 52 can also include race or track
officials broadcasting on a radio frequency at the race event 10,
which is typically only heard by fans or participants at the race
event 10. Like the teams in the list 52, the TV commentary and
track channel can also be selected and/or prioritized in the manner
described above.
[0021] The TV commentary can be provided to the server 36 as
indicated at 67 or to the remote computer 42 as indicated at 69,
wherein the TV feed is separately provided to the remote computer
42 or the TV feed and the signals from the wide area network are
provided to a single assembly such as a set-top box. Communications
pertaining to the track channel can be received by the antenna 32
or otherwise provided directly to the server 36.
[0022] In another further embodiment, the remote fan can also
select to hear the race leading participants regardless of whether
or not they have been selected in the list 52. A selection field is
indicated in the user interface 50 at 60. The selection field 60
includes a user selectable number of leading participants to listen
to at 62. The selectable number 62 is adjustable from zero to any
desired number of participants. A priority field 64 can also be
provided and functions similarly to the priority indicators described
above. Thus, in the embodiment illustrated, if there exists a
communication from one of the first three race leaders, and that
team was not otherwise selected from the list 52, the communication
will be played over the speaker of the remote computer 42 and the
unique identifier 54 will be highlighted or otherwise identified to
the remote fan in the list 52.
[0023] Current race statistics identifying the race position of each
of the cars 12-18 can be provided as an input to
the server 36, as indicated at 65. Based on the race statistics,
the server 36 or the remote computer 42 can determine if a
communication from a particular team meets the criteria of field
60.
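The criterion of field 60 amounts to a simple membership test against the current standings supplied at 65. The following sketch is illustrative; the team names and standings shown are hypothetical.

```python
def meets_leader_criteria(team, standings, n_leaders):
    """True when `team` is among the first `n_leaders` entries of
    the current race standings (the number selected at 62)."""
    return team in standings[:n_leaders]

# Hypothetical standings, leader first.
standings = ["team_64", "team_24", "team_88", "team_12"]

assert meets_leader_criteria("team_88", standings, 3)
assert not meets_leader_criteria("team_12", standings, 3)
```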
[0024] In yet a further embodiment, the user interface 50 allows
the remote fan to choose which team members of each team to listen
to if a communication exists. In the embodiment illustrated, upon
selection of the unique identifier 54 for a particular
participating team, a list 70 listing the team members
communicating on a team channel is provided. Typically, the team
members include a driver 72, a team member 74 located in the pit
area 28 and one or more spotters 76 also communicating on the team
channel. The list 70 also includes corresponding selectable
portions 78 for each of the team members 72, 74 and 76
communicating on the team channel. By using an input device, such
as a mouse, the remote fan can select which team members 72, 74 and
76 of each team to listen to while filtering out communications of
unwanted team members from that team. This feature is particularly
useful at some race events where communications from spotters occur
frequently; however, the information may not be particularly
interesting to the race fan. By allowing the remote fan to select
those communications of each team that he is interested in and
filter out communications from other team members, audio heard from
the race event 10 can be more enjoyable.
[0025] In the embodiment illustrated, the unique identifier 54
indicates the team member in the list 70 whose
communication is being heard over the speaker of the remote
computer 42. Thus, when the driver is being heard over the speaker,
his/her name will appear flashing, highlighted or in bold in the
list 52 as illustrated with respect to car number 64. Likewise,
when the team member in the pit area is being heard, a suitable
designation such as "PIT" will appear in conjunction with the car
number.
[0026] In a further embodiment, some or all team communications can
be stored for later playing. For instance, as explained above,
priorities can be set so that some team communications will be
heard over others. In the event overlap exists in the
communications such that one team is not heard because another
team's communication is being played, the former communication's
audio signals can be stored (either at the server or at the remote
listener's computer, discussed below) so that the listener can hear
the communication at a later time. Any overlapping communications
can be stored in this manner.
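The storage of overlapping communications can be sketched as a simple first-in, first-out buffer. The class and field names below are hypothetical and serve only to illustrate the store-and-replay behavior described above.

```python
from collections import deque

class OverlapBuffer:
    """Holds communications that were not played because a
    higher-priority channel was active, for later playback."""

    def __init__(self):
        self._stored = deque()

    def store(self, team, member, audio, timestamp):
        # Record who spoke and when, so the listener can be shown
        # a time stamp or other signifier with each stored entry.
        self._stored.append((team, member, audio, timestamp))

    def next_stored(self):
        """Pop the oldest stored communication, or None if empty."""
        return self._stored.popleft() if self._stored else None

buf = OverlapBuffer()
buf.store("team_24", "pit", b"...", 101.5)
buf.store("team_88", "driver", b"...", 102.0)
# Playback later proceeds oldest-first.
assert buf.next_stored()[0] == "team_24"
```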
[0027] If desired, list 52 can include an identifier 53 that
indicates a stored communication of the corresponding team. The
identifier 53 can also indicate the team member speaking, for
example, "D" for driver, "P" for pit, etc. In FIG. 2, a table 55
can be accessed indicating all of the stored communications for
that team. The listener can then indicate which communication to
play from the table 55. The indications in the table 55 can also
include a time stamp or other signifier (e.g. a lap identifier for
that team or referenced to the race leader) to give the listener
information as to when the communication was made. The listener can
individually play each desired communication, or alternatively,
playback can begin with that communication and continue with other
communications from other members of that team within a selected
duration (from the first identified communication or from last
played communication), which can be adjusted if desired at 57.
[0028] In one mode of operation, the stored communication can be
played automatically when there are currently no other selected
teams communicating. In another mode of operation, the listener can
indicate playing when the listener desires, for instance, by
activating the identifier through an input device such as a mouse.
If during playback, a higher priority communication is received,
the higher priority communication can be played immediately or
stored automatically for later playback.
[0029] FIG. 3 illustrates communication between the team members of
the team comprising car 12, the team member 12A in the pit area 28
and the spotter 12B. As stated above, the communications of these
team members are herein considered a channel. To identify each of
the team members in order to allow filtering using the list 70,
suitable analog or digital identifiers are associated with the
communications from each of the team members. For instance, if the
team members communicate over a digital channel, a unique digital
tag can be associated with the driver of car 12, the team member
12A in the pit area 28 and the spotter 12B. Based on the digital
tag, the communication from that team member is played over the
speaker of the remote computer 42 if that particular member has
been selected in the list 70 of that team. In another embodiment,
each of the team members can transmit on their own unique carrier
frequency, but be able to receive communications from the other
team members which are also transmitting on unique frequencies. In
this manner, if a communication exists on a frequency corresponding
to a team member selected in the list 70, that communication would
be heard over the speaker of the remote computer 42. In this
embodiment, although unique frequencies have been assigned to each
of the team members, that set of unique frequencies is considered a
team channel.
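The digital-tag filtering described above reduces to a lookup from tag value to team member, checked against the fan's selections in the list 70. The tag values and member names below are hypothetical illustrations.

```python
# Hypothetical mapping from digital tag values to team members.
TAGS = {0: "driver", 1: "pit", 2: "spotter"}

def should_play(tag, selected_members):
    """Play a communication only if its digital tag maps to a team
    member the fan has selected in the list 70 for that team."""
    return TAGS.get(tag) in selected_members

# The fan selected the driver and pit member but not the spotter.
assert should_play(0, {"driver", "pit"})
assert not should_play(2, {"driver", "pit"})
```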
[0030] FIGS. 1 and 4 illustrate other information that can be
transmitted to a remote fan using the system of the present
invention. FIG. 4 is a display or monitor 100 at the remote
computer 42. During the race, the monitor 100 displays identifiers
102 for each of the participants, for example, those selected in
the list 52. The identifiers 102 can be continuously provided on
the display 100 or selected and displayed at the control of the
remote fan. A major portion 104 of the monitor 100 can show the
picture currently being received from the television broadcaster.
As stated above, this signal can be provided to the server 36 as
indicated at 67 or to the remote computer 42 as indicated at 69. In
addition to the television broadcaster's view of the race event 10,
one or more of the race cars 12-18 can be equipped with cameras as
is commonly known in the art. Each of the signals from the cameras
in the race cars 12-18 can be provided to the server 36 as
indicated at 106. With the views of each of the cars 12-18 provided
to the server 36, the remote fan can select one or more views from
the car views 106 as he desires with the remote computer 42. In the
embodiment illustrated, a portion 108 of each identifier 102 is
identified as a "car view" for that particular car and is used to
select the car view for display on the monitor 100. The selected
car view then can be displayed in a portion 110 of the monitor 100
in conjunction with the view provided by the television
broadcaster. If desired, the car view can be expanded to cover all
of the monitor 100. In another embodiment, each of the car views
can be provided in a list, similar to the list 52 illustrated in
FIG. 2, and selected when desired by the remote fan.
[0031] In a further embodiment, the server 36 receives telemetry
signals from each of the cars 12-18 indicating, for example, the
speed of the car, the engine speed of the car, the current gear and
when brakes are applied. This information is provided to the remote
computer 42 and displayed on the monitor 100 such as indicated at
112. In the embodiment illustrated, the telemetry signals are
received by the radio receiver 34. The remote fan selects which
telemetry signals to display. In the embodiment illustrated, a
portion 114 is provided for each of the identifiers 102 to select
the corresponding telemetry signals of each car. If desired, a list
similar to the list 52 described above, can be separately provided
for the selection of telemetry signals.
[0032] In a further embodiment, team statistics can be separately
selected and displayed when desired. In the embodiment illustrated,
the statistics are selected through a portion 116 of each of the
identifiers 102. The team statistics can include, for example, the
participant's current position in the race, the top speed obtained
during the race, the average speed during the race, the average
speed for the last five laps, the pit times during the race and the
average time in seconds behind the leader. These unique statistics
for each of the teams are displayed on the monitor 100 when
selected by the user using the remote computer 42. Each of the team
statistics are provided to the server 36 as indicated at 65 and
updated as necessary during the race.
[0033] FIG. 5 illustrates an exemplary server 36 for transmission
of race information, discussed above, through the wide area network
40 to the remote computers 42. The server 36 includes a processing
module 120 that receives any or all of the audio signals 38
(storing the signals if necessary), the telemetry signals, the race
statistics 65, the car views 106 and the television feed 67. The
processing module 120 processes the information for transmission to
the remote computers 42, which typically includes digitizing the
signals and forming the digitized data into data packets that are
sent through the wide area network 40 to the remote computers 42
through a transmission module 122. The use of transmitted data
packets, which can be sent individually, or grouped as files, to
provide substantially continuous viewing and/or listening from a
remote location over the Internet is well known. One manufacturer
of such technology is RealNetworks, Inc. of Seattle,
Wash., which produces REALAUDIO and REALVIDEO. These systems allow a
user of a remote computer to select a particular "audio station" or
"video station" from a server across the Internet. A data stream is
then transmitted to the user whereat a receiving module provided on
the user's computer converts the data stream for display through
the monitor and/or output through the speaker.
[0034] In one embodiment of the present invention, the processing
module 120 processes the information into data packets that include
information for at least two different audio, video or telemetry
signals for different teams. Referring to FIG. 6, an exemplary data
packet 140 for audio signals is illustrated. It should be
understood that the embodiment shown is for illustrative purposes
only and that other data packets having alternative structures can
be used in the present invention.
[0035] The data packet 140 includes portions 142, 143, 144, 145,
146, 147 and 148 corresponding to each of the team channels for the
cars 12-18, respectively. In particular, the values contained in
portions 142-148 are indicative of communication between the team
members for each respective team. In the embodiment illustrated,
analog-to-digital converters 149 are provided to convert the audio
signals 38 to digital values, which are provided to the processing
module 120. Of course, if the audio signals are digital, the
analog-to-digital converters are not required. The processing
module 120 receives the digital values and forms data packets 140
that are transmitted to the remote computer 42 through the wide
area network 40. In the exemplary embodiment, the length of the
data packet 140 is a function of the number of team channels
present. Typically, the length of the data packet 140 will be
limited by the throughput of the connections forming the wide area
network 40. In some situations, it may be necessary to form
different data packets for different sets of teams. The remote
computer user would then select which stream of data packets to
receive. This is represented in FIG. 5 as data lines 150A, 150B,
150C and 150D. For example, data packets for data line 150A can be
for a first set of five preselected team channels, whereas data
packets for data lines 150B and 150C can be for a second and third
set of preselected team channels. In contrast, data packets for
data line 150D can be for team channels dynamically selected. For
example, the team channels present in data line 150D can be the top
five cars leading the race, wherein the processing module 120 forms
the data packets for data line 150D from the race statistics 65.
Alternatively, the team channels present in data line 150D can be
chosen based on other criteria including requests made by the
remote computers 42.
[0036] In a further embodiment, the data packet 140 includes a
portion 160 having subportions 162, 163, 164, 165, 166, 167 and 168
corresponding to each of the portions 142-148. In particular, the
values present in subportions 162-168 are used to identify the
particular team member of each team that is talking during the
instant of time that the data packet 140 represents. As explained
above, a race team can include a driver, a pit member and a
spotter. The unique value is associated with each of these members
and used in the portions 162-168 to identify the team member that
is talking. In effect, the portions 162-168 comprise identifiers or
tags for each of the portions 142-148. In one exemplary embodiment,
one or two bytes can be used for each of the portions 142-148,
whereas one or two bytes can be used for the portion 160 wherein
two bits are associated with each subportion 162-168.
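The packet layout just described — one byte of audio per team channel in the portions 142-148, followed by a two-bit speaker tag per channel — can be sketched as follows. The function, the member codes and the sample values are hypothetical; seven channels at two bits each fit within a two-byte tag field.

```python
import struct

# Hypothetical two-bit codes identifying the speaking team member.
MEMBER_CODES = {"none": 0, "driver": 1, "pit": 2, "spotter": 3}

def pack_packet(samples, speakers):
    """Pack one audio byte per team channel (the portions 142-148)
    followed by a little-endian two-byte tag field in which two bits
    per channel identify the speaking member (subportions 162-168).

    `samples` is a list of 7 audio byte values; `speakers` is a list
    of 7 member names, one per team channel.
    """
    tags = 0
    for i, member in enumerate(speakers):
        tags |= MEMBER_CODES[member] << (2 * i)
    return bytes(samples) + struct.pack("<H", tags)

pkt = pack_packet(
    [10, 20, 30, 40, 50, 60, 70],
    ["driver", "none", "pit", "none", "none", "spotter", "none"],
)
assert len(pkt) == 9  # seven audio bytes plus the two-byte tag field
```

Because each team is identified by its position in the packet, no per-team identifier need be transmitted, consistent with the data reduction noted in [0037].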
[0037] In the data packet 140 described above, each team is
identified by its position in the data packet. It should be
understood that further information can be transmitted to the
remote computer 42 so that the remote computer 42 can properly
determine which teams comprise the data packet. Even in the case of
data line 150D, the server 36 can transmit information to the
remote computers 42 indicating which teams currently comprise the
corresponding data packets. In this manner, unique identifiers need
not be associated with each team or team member as data is
transmitted, which reduces the amount of data transmitted. However,
in an alternative embodiment of the present invention, identifiers
can be associated with each data packet identifying which teams
and/or team members are associated with each corresponding data
packet. This allows the data packet to only contain teams currently
communicating at any given instant. Accordingly, the data packets
can be of varying length. Although described above with respect to
team audio signals, it should be understood that other audio signals
such as the television feed 67 can be included. In addition,
similar data packets can be formed for video and telemetry
information, or alternatively, integrated into the data packet with
the audio signals. Compression techniques can be used to minimize
the length of the data packet, if desired.
[0038] In yet a further alternative embodiment, each data packet
can be for only one team channel or team member. Identifiers can be
included to identify which team or team member the data packet is
associated with. If desired, any of the above-described data
packets can be transmitted using multiplex transmission
communication techniques incorporating, but not limited to, time
division, frequency division or phase division.
[0039] Referring to FIG. 7, the remote computer 42 includes a
receiver module 170. The receiver module 170 receives the data
packets and processes the information contained therein. The
receiver module 170 receives the data packets according to any of
the transmission techniques described above. In one embodiment, the
receiver module 170 functions as a filter and only allows those
teams that have been selected (check 56) to be heard over a speaker
174. The selections can be stored at 175. In a further embodiment,
the selections 175 can include priority and team member selections.
The receiver module 170 processes each data packet according to the
assigned priority and team members to be heard, as discussed above.
The signals can be stored for later playback when desired.
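The receiver module's filtering role can be sketched as follows: for each channel in a received packet, the audio passes to the speaker 174 only when the speaking member matches the stored selections 175. The data shapes and names are hypothetical simplifications.

```python
def filter_packet(portions, tags, selections):
    """Pass to the speaker only those channels whose team was
    checked (check 56) and whose speaking member was selected in
    the list 70.

    `portions` maps channel index to an audio value, `tags` maps
    channel index to the speaking member, and `selections` maps
    channel index to the set of members the fan selected.
    """
    heard = []
    for channel, audio in portions.items():
        member = tags.get(channel)
        if member in selections.get(channel, set()):
            heard.append((channel, audio))
    return heard

portions = {12: 0x0A, 14: 0x0B}
tags = {12: "driver", 14: "spotter"}
selections = {12: {"driver", "pit"}}  # car 14 not selected at all

# Only car 12's driver passes the filter.
assert filter_packet(portions, tags, selections) == [(12, 0x0A)]
```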
[0040] Race statistics 65 are periodically transmitted from the
server 36 to the remote computer 42 and stored at 176. The race
statistics 176 are accessed by the user for display on a monitor
177 as desired and used to assign priority in accordance with
values selected at 62 and 64 in FIG. 2.
[0041] In another embodiment, at least some of the filtering is
performed by the server 36. For example, data line 150D represents
transmission of audio signals for a selected number of teams
leading the race. Thus, although the server 36 receives all
communications from the receiver 32, only those communications
pertaining to the selected number of leading teams are transmitted
to the receiver module 170. In this embodiment, the receiver module
170 can pass all communications to the speaker 174, or, if desired,
further filter the communications pursuant to stored
preferences.
[0042] In one preferred method of operation, the receiver module
170 can be used for a plurality of race events. In particular,
information regarding each of the teams for use in the user
interface 50 and display of FIG. 4 is provided to the remote
computer 42 over the wide area network 40, for example, from the
server 36 or another remote computer, and stored at 178. The remote
computer user then selects those audio channels of interest,
assigning priority levels and choosing which team members will be
heard, if desired. Data packets and race statistics are received
periodically. As data packets are received and processed, the user
interface 50 or display of FIG. 4 is updated to indicate which
audio channel and/or team member is being heard over the speaker
174.
[0043] If desired, more than one speaker 174 can be used for
playing audio signals. FIG. 7 illustrates a second speaker 180. In
one embodiment, the speaker 180 is used for playing audio signals
from a first set of one or more teams, while the speaker 174 is
used for playing audio signals from a second set of one or more
teams. Upon receipt of the data representative of the audio
signals, the receiver module 170 filters the signals to each of the
speakers 174 and 180. In another embodiment, the speakers 174 and
180 can be used when assigned priority values would cut off an audio
signal being played through the speakers. For instance, if
communications are currently being heard from a priority "2" team
through the speaker 174 of the remote computer 42 and communication
starts for a priority "1" team, the audio being played over the
speaker 174 can continue, while the communication from the priority
"1" team will be played over the speaker 180. Although described
with respect to the use of two speakers, it should be understood
that three, four or more speaker systems can be used similarly.
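The two-speaker routing just described can be sketched as below; instead of preempting, a higher-priority communication is sent to the second speaker. The speaker and team names are hypothetical, and a lower priority number again means higher priority.

```python
def route_to_speakers(playing, incoming, priorities):
    """Route an incoming communication so the one currently playing
    on the first speaker is not cut off.

    Returns a mapping from speaker to the channel it plays; a
    higher-priority incoming communication goes to the second
    speaker, while anything else simply waits (None).
    """
    if priorities[incoming] < priorities[playing]:
        return {"speaker_174": playing, "speaker_180": incoming}
    return {"speaker_174": playing, "speaker_180": None}

priorities = {"team_64": 1, "team_24": 2, "team_88": 3}

# A priority "1" team plays on the second speaker while the
# priority "2" team continues on the first.
routing = route_to_speakers("team_24", "team_64", priorities)
assert routing == {"speaker_174": "team_24", "speaker_180": "team_64"}
```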
[0044] Although the present invention has been described with
reference to preferred embodiments, workers skilled in the art will
recognize that changes may be made in form and detail without
departing from the spirit and scope of the invention.
* * * * *