U.S. patent application number 14/940,779 was filed with the patent office on 2015-11-13 and published on 2016-03-10 for audio processing algorithms. The applicant listed for this patent is Sonos, Inc. Invention is credited to Timothy W. Sheen.

United States Patent Application 20160070530
Kind Code: A1
Inventor: Sheen; Timothy W.
Publication Date: March 10, 2016
Family ID: 55067619
Application Number: 14/940,779
Audio Processing Algorithms
Abstract
An example implementation involves a computing device
transmitting, via a local area network, a command that instructs a
playback device to play a particular audio signal. The example
implementation also involves the computing device receiving data
indicating a detected audio signal corresponding to playback of the
particular audio signal by the playback device, where the detected
audio signal includes a portion of the particular audio signal. The
implementation further involves the computing device obtaining data
indicating a predetermined audio characteristic and determining an
audio processing algorithm based on the detected audio signal and
the predetermined audio characteristic. The example implementation
involves causing the playback device to apply the determined audio
processing algorithm when playing audio via at least one
speaker.
Inventors: Sheen; Timothy W. (Brighton, MA)
Applicant: Sonos, Inc. (Santa Barbara, CA, US)
Family ID: 55067619
Appl. No.: 14/940,779
Filed: November 13, 2015
Related U.S. Patent Documents

Application Number: 14/481,505
Filing Date: Sep 9, 2014
(Parent of the present application, 14/940,779)
Current U.S. Class: 381/80
Current CPC Class: H04R 2227/005 (20130101); H04R 27/00 (20130101); H04S 7/307 (20130101); G06F 3/165 (20130101)
International Class: G06F 3/16 (20060101) G06F003/16; H04R 27/00 (20060101) H04R027/00
Claims
1. A non-transitory computer-readable medium having stored thereon
instructions executable by a computing device to perform functions
comprising: transmitting, via a local area network, a command that
instructs a playback device to play a particular audio signal;
receiving data indicating a detected audio signal corresponding to
playback of the particular audio signal by the playback device,
wherein the detected audio signal comprises a portion of the
particular audio signal; obtaining data indicating a predetermined
audio characteristic; determining an audio processing algorithm
based on the detected audio signal and the predetermined audio
characteristic; and causing the playback device to apply the
determined audio processing algorithm when playing audio via at
least one speaker.
2. The non-transitory computer-readable medium of claim 1, wherein
causing the playback device to apply the determined audio
processing algorithm comprises transmitting, to the playback
device, data indicating one or more audio processing algorithm
parameters associated with the determined audio processing
algorithm.
3. The non-transitory computer-readable medium of claim 1, wherein
the functions further comprise: causing the data indicating the
determined audio processing algorithm to be stored in data
storage.
4. The non-transitory computer-readable medium of claim 1, wherein
determining the audio processing algorithm based on the detected
audio signal and the predetermined audio signal comprises:
determining the audio processing algorithm further based on an
acoustic characteristic of the playback device in a playback zone
that comprises the playback device.
5. The non-transitory computer-readable medium of claim 4, wherein
the functions further comprise: obtaining the acoustic
characteristic of the playback device in the playback zone.
6. The non-transitory computer-readable medium of claim 1, wherein
obtaining the data indicating the predetermined audio
characteristic comprises receiving, via a network, the
predetermined audio signal from a device in communication with the
computing device.
7. The non-transitory computer-readable medium of claim 1, wherein
obtaining the data indicating the predetermined audio
characteristic comprises: transmitting, via a network to a device
in communication with the computing device, a message indicating a
configuration of the playback device; and receiving, via the
network from the device in communication with the computing device,
the predetermined audio characteristic, wherein the predetermined
audio characteristic corresponds to the configuration of the
playback device.
8. The non-transitory computer-readable medium of claim 1, wherein
obtaining the data indicating the predetermined audio
characteristic comprises obtaining the data indicating the
predetermined audio characteristic from a local memory storage of
the computing device.
9. The non-transitory computer-readable medium of claim 1, wherein
applying the determined audio processing algorithm by the playback
device when playing audio produces an audio signal having a
particular audio characteristic that is substantially the same as
the predetermined audio characteristic.
10. The non-transitory computer-readable medium of claim 1, wherein
the data indicating the predetermined audio characteristic
indicates one or more audio signal parameters for a predetermined
audio signal having the predetermined audio characteristic.
11. A computing device comprising: a processor; and memory having
stored thereon instructions executable by the processor to cause
the computing device to perform functions comprising: transmitting,
via a local area network, a command that instructs a playback
device to play a particular audio signal; receiving data indicating
a detected audio signal corresponding to the playback of the
particular audio signal by the playback device, wherein the
detected audio signal comprises a portion of the particular audio
signal; obtaining data indicating a predetermined audio
characteristic; determining an audio processing algorithm based on
the detected audio signal and the predetermined audio
characteristic; and causing the playback device to apply the
determined audio processing algorithm when playing audio via one or
more speakers.
12. The computing device of claim 11, wherein causing the playback
device to apply the determined audio processing algorithm comprises
transmitting, to the playback device, data indicating one or more
audio processing algorithm parameters associated with the
determined audio processing algorithm.
13. The computing device of claim 11, wherein the functions further
comprise: causing the data indicating the determined audio
processing algorithm to be stored in data storage.
14. The computing device of claim 11, wherein determining the audio
processing algorithm based on the detected audio signal and the
predetermined audio signal comprises: determining the audio
processing algorithm further based on an acoustic characteristic of
the playback device in a playback zone that comprises the playback
device.
15. The computing device of claim 11, wherein obtaining the data
indicating the predetermined audio characteristic comprises
receiving, via a network, the predetermined audio signal from a
device in communication with the computing device.
16. A method comprising: transmitting, via a computing device over
a local area network, a command to cause a playback device to play
a particular audio signal; receiving, via the computing device,
data indicating a detected audio signal corresponding to the
playback of the particular audio signal by the playback device,
wherein the detected audio signal comprises a portion of the
particular audio signal; obtaining, via the computing device, data
indicating a predetermined audio characteristic; determining, by
the computing device, an audio processing algorithm based on the
detected audio signal and the predetermined audio characteristic;
and causing, via the computing device, the playback device to apply
the determined audio processing algorithm when playing audio via
one or more speakers.
17. The method of claim 16, wherein obtaining the data indicating
the predetermined audio characteristic comprises: transmitting, via
the computing device over a network to a device in communication
with the computing device, a message indicating a configuration of
the playback device; and receiving, via the computing device over
the network from the device in communication with the computing
device, the predetermined audio characteristic, wherein the
predetermined audio characteristic corresponds to the configuration
of the playback device.
18. The method of claim 16, wherein obtaining the data indicating
the predetermined audio characteristic comprises obtaining the data
indicating the predetermined audio characteristic from a local
memory storage of the computing device.
19. The method of claim 16, wherein applying the determined audio
processing algorithm by the playback device when playing audio
produces an audio signal having a particular audio characteristic
that is substantially the same as the predetermined audio
characteristic.
20. The method of claim 16, wherein the data indicating the
predetermined audio characteristic indicates one or more audio
signal parameters for a predetermined audio signal having the
predetermined audio characteristic.
Description
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority under 35 U.S.C. § 120
to, and is a continuation of, U.S. non-provisional patent
application Ser. No. 14/481,505, filed Sep. 9, 2014, entitled "Audio
Processing Algorithms," which is incorporated herein by reference.
FIELD OF THE DISCLOSURE
[0002] The disclosure is related to consumer goods and, more
particularly, to methods, systems, products, features, services,
and other elements directed to media playback or some aspect
thereof.
BACKGROUND
[0003] Options for accessing and listening to digital audio in an
out-loud setting were limited until 2003, when SONOS, Inc. filed
for one of its first patent applications, entitled "Method for
Synchronizing Audio Playback between Multiple Networked Devices,"
and began offering a media playback system for sale in 2005. The
Sonos Wireless HiFi System enables people to experience music from
a plethora of sources via one or more networked playback devices.
Through a software control application installed on a smartphone,
tablet, or computer, one can play what he or she wants in any room
that has a networked playback device. Additionally, using the
controller, for example, different songs can be streamed to each
room with a playback device, rooms can be grouped together for
synchronous playback, or the same song can be heard in all rooms
synchronously.
[0004] Given the ever growing interest in digital media, there
continues to be a need to develop consumer-accessible technologies
to further enhance the listening experience.
BRIEF DESCRIPTION OF THE DRAWINGS
[0005] Features, aspects, and advantages of the presently disclosed
technology may be better understood with regard to the following
description, appended claims, and accompanying drawings where:
[0006] FIG. 1 shows an example media playback system configuration
in which certain embodiments may be practiced;
[0007] FIG. 2 shows a functional block diagram of an example
playback device;
[0008] FIG. 3 shows a functional block diagram of an example
control device;
[0009] FIG. 4 shows an example controller interface;
[0010] FIG. 5 shows an example flow diagram of a first method for
maintaining a database of audio processing algorithms;
[0011] FIG. 6A shows an example portion of a first database of
audio processing algorithms;
[0012] FIG. 6B shows an example portion of a second database of
audio processing algorithms;
[0013] FIG. 7 shows an example flow diagram of a second method for
maintaining a database of audio processing algorithms;
[0014] FIG. 8 shows an example playback zone within which a
playback device may be calibrated;
[0015] FIG. 9 shows an example flow diagram of a first method for
determining an audio processing algorithm based on one or more
playback zone characteristics;
[0016] FIG. 10 shows an example flow diagram of a second method for
determining an audio processing algorithm based on one or more
playback zone characteristics; and
[0017] FIG. 11 shows an example flow diagram for identifying an
audio processing algorithm from a database of audio processing
algorithms.
[0018] The drawings are for the purpose of illustrating example
embodiments, but it is understood that the inventions are not
limited to the arrangements and instrumentality shown in the
drawings.
DETAILED DESCRIPTION
I. Overview
[0019] When a playback device plays audio content in a playback
zone, a quality of the playback may depend on an acoustic
characteristic of the playback zone. In discussions herein, the
playback zone may include one or more playback devices or groups of
playback devices. The acoustic characteristic of the playback zone
may depend on a dimension of the playback zone, types of furniture
in the playback zone, and an arrangement of the furniture in the
playback zone, among other factors. As such, different playback
zones may have different acoustic characteristics. Because a given
model of the playback device may be used in a variety of different
playback zones with different acoustic characteristics, a single
audio processing algorithm may not provide a consistent quality of
audio playback by the playback device in each of the different
playback zones.
[0020] Examples discussed herein relate to determining an audio
processing algorithm for the playback device to apply based on
acoustic characteristics of a playback zone the playback device is
in. Application of the determined audio processing algorithm by the
playback device when playing audio content in the playback zone may
cause audio content rendered by the playback device in the playback
zone to assume a predetermined audio characteristic, at least to
some extent. In one case, application of the audio processing
algorithm may alter audio amplifications at one or more audio
frequencies of the audio content. Other examples are also
possible.
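By way of a purely illustrative sketch (not language from the application), altering amplification at one or more audio frequencies might be expressed as per-band gains applied to a block of audio samples; the function name, band edges, and FFT-based approach below are assumptions for illustration only.

```python
# Illustrative sketch only: applying frequency-dependent gains (one possible
# form of "audio processing algorithm") to a block of audio samples.
import numpy as np

def apply_band_gains(samples, sample_rate, band_edges_hz, gains_db):
    """Scale each frequency band of `samples` by the corresponding gain (dB)."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    for (low, high), gain_db in zip(band_edges_hz, gains_db):
        band = (freqs >= low) & (freqs < high)
        spectrum[band] *= 10 ** (gain_db / 20.0)   # dB -> linear amplitude
    return np.fft.irfft(spectrum, n=len(samples))

# Example: boost 100-300 Hz by 3 dB and cut 2-4 kHz by 2 dB in a test tone.
rate = 44100
t = np.arange(rate) / rate
audio = 0.5 * np.sin(2 * np.pi * 200 * t)
processed = apply_band_gains(audio, rate, [(100, 300), (2000, 4000)], [3.0, -2.0])
```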
[0021] In one example, a database of audio processing algorithms
may be maintained, and an audio processing algorithm in the
database may be identified based on one or more characteristics of
the playback zone. The one or more characteristics of the playback
zone may include the acoustic characteristic of the playback zone,
and/or one or more of a dimension of the playback zone, a flooring
and/or wall material of the playback zone, and a number and/or
types of furniture in the playback zone, among other
possibilities.
[0022] Maintaining the database of audio processing algorithms may
involve determining at least one audio processing algorithm that
corresponds to the one or more characteristics of the playback
zone, and adding the determined audio processing algorithm to the
database. In one example, the database may be stored on one or more
devices maintaining the database, or one or more other devices. In
discussions herein, unless otherwise noted, the functions for
maintaining the database may be performed by one or more computing
devices (e.g., servers), one or more playback devices, or one or
more controller devices, among other possibilities. However, for
simplicity, the one or more devices performing the functions may be
generally referred to as a computing device.
[0023] In one example, determining such an audio processing
algorithm may involve the computing device determining an acoustic
characteristic of a playback zone. In one case, the playback zone
may be a model room used to simulate a playback zone within which
the playback device may play audio content. In such a case, one or
more physical characteristics of the model room (e.g., dimensions
and floor and wall materials) may be pre-determined. In
another case, the playback zone may be a room in a home of a user
of the playback device. In such a case, the physical
characteristics of the playback zone may be provided by the user,
or may be otherwise unknown.
[0024] In one example, the computing device may cause the playback
device in the playback zone to play an audio signal. In one case,
the played audio signal may include audio content with frequencies
covering substantially an entire frequency range renderable by the
playback device. The playback device may subsequently detect an
audio signal using a microphone of the playback device. The
microphone may be a built-in microphone of the playback device. In
one case, the detected audio signal may
include a portion corresponding to the played audio signal. For
instance, the detected audio signal may include a component of the
played audio signal reflected within the playback zone. The
computing device may receive the detected audio signal from the
playback device, and determine an acoustic response of the playback
zone based on the detected audio signal.
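As a hedged illustration of the step described in paragraph [0024], one simple way to derive an acoustic response from the played and detected signals is to compare their magnitude spectra; the sketch below rests on that assumption and is not asserted to be the method actually used.

```python
# Minimal sketch: estimating a playback zone's frequency response by comparing
# the played test signal with the signal detected by the device's microphone.
import numpy as np

def estimate_response(played, detected, eps=1e-12):
    """Return the magnitude response |Detected(f)| / |Played(f)|."""
    n = min(len(played), len(detected))
    played_spec = np.fft.rfft(played[:n])
    detected_spec = np.fft.rfft(detected[:n])
    return np.abs(detected_spec) / (np.abs(played_spec) + eps)

# Example with synthetic data: the "room" attenuates the signal by half.
rate = 44100
t = np.arange(rate) / rate
played = np.sin(2 * np.pi * 440 * t)
detected = 0.5 * played
response = estimate_response(played, detected)   # ~0.5 near 440 Hz
```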
[0025] The computing device may then determine the acoustic
characteristic of the playback zone by removing an acoustic
characteristic of the playback device from the acoustic response of
the playback zone. The acoustic characteristic of the playback
device may be an acoustic characteristic corresponding to a model
of the playback device. In one case, the acoustic characteristic
corresponding to the model of the playback device may be determined
based on audio signals played and detected by a representative
playback device of the model in an anechoic chamber.
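A minimal sketch of the "removal" step in paragraph [0025], assuming the responses are magnitude responses over matching frequency bins so that removing the device's characteristic amounts to a per-bin division; the names and regularization term are illustrative.

```python
# Hedged sketch: dividing out the playback device's own (anechoic-chamber)
# characteristic from the measured response, bin by bin.
import numpy as np

def remove_device_characteristic(zone_response, device_response, eps=1e-12):
    """Per-frequency magnitude of the zone alone, given the combined response
    and the device's anechoic-chamber response for the same frequency bins."""
    return zone_response / (device_response + eps)

# Example: a combined response of 0.4 with a device response of 0.8 leaves a
# zone-only characteristic of 0.5 at every frequency bin.
combined = np.full(1024, 0.4)
device = np.full(1024, 0.8)
zone_only = remove_device_characteristic(combined, device)   # -> 0.5
```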
[0026] The computing device may then determine a corresponding
audio processing algorithm based on the determined acoustic
characteristics of the playback zone and a predetermined audio
characteristic. The predetermined audio characteristic may involve
a particular frequency equalization that is considered
good-sounding. The corresponding audio processing algorithm may be
determined such that an application of the corresponding audio
processing algorithm by the playback device when playing audio
content in the playback zone causes audio content rendered by the
playback device in the playback zone to assume the predetermined
audio characteristic, at least to some extent. For instance, if the
acoustic characteristic of the playback zone is one in which a
particular audio frequency is more attenuated than other
frequencies, the corresponding audio processing algorithm may
involve an increased amplification of the particular audio
frequency. Other examples are also possible.
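The derivation in paragraph [0026] could be sketched as computing per-frequency gains that map the zone's characteristic onto the predetermined (target) characteristic; the boost limit below is an added assumption, not something stated in the application.

```python
# Illustrative sketch of deriving per-frequency equalization gains so that the
# zone's characteristic approaches a predetermined (target) characteristic.
import numpy as np

def derive_eq_gains(zone_characteristic, target_characteristic,
                    max_boost_db=12.0, eps=1e-12):
    """Gain (dB) per frequency bin that maps the zone response onto the target."""
    gains_db = 20 * np.log10((target_characteristic + eps) /
                             (zone_characteristic + eps))
    return np.clip(gains_db, -max_boost_db, max_boost_db)

# Example: a bin attenuated to 0.5 of the flat target gets about a +6 dB boost.
zone = np.array([1.0, 0.5, 1.0])
target = np.ones(3)
print(derive_eq_gains(zone, target))   # [0.0, ~6.02, 0.0]
```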
[0027] An association between the determined audio processing
algorithm and the acoustic characteristic of the playback zone may
then be stored as an entry in a database. In some cases, an
association between the audio processing algorithm and one or more
other characteristics of the playback zone may additionally or
alternatively be stored in the database. For instance, if the
playback zone is of a particular dimension, an association between
the audio processing algorithm and the particular room dimension
may be stored in the database. Other examples are also
possible.
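As an illustrative, assumed storage format for the associations described in paragraph [0027], a small relational table keyed by the zone characteristic and an optional room dimension might look like the following; the schema and JSON encoding are not taken from the application.

```python
# Sketch of storing (playback zone characteristic -> audio processing
# algorithm) associations; schema and serialization are assumptions.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE algorithms (
                    zone_characteristic TEXT,   -- e.g. per-band magnitudes
                    room_dimension TEXT,        -- optional, e.g. "4x5x2.5 m"
                    eq_gains_db TEXT)""")

def store_association(zone_characteristic, eq_gains_db, room_dimension=None):
    conn.execute("INSERT INTO algorithms VALUES (?, ?, ?)",
                 (json.dumps(zone_characteristic), room_dimension,
                  json.dumps(eq_gains_db)))
    conn.commit()

store_association([1.0, 0.5, 1.0], [0.0, 6.0, 0.0], room_dimension="4x5x2.5 m")
```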
[0028] In one example, the database may be accessed by a computing
device to identify an audio processing algorithm for a playback
device to apply in a playback zone. In one example, the computing
device accessing the database and identifying the audio processing
algorithm may be the same computing device maintaining the
database, as described above. In another example, the computing
device may be a different computing device.
[0029] In some cases, accessing the database to identify an audio
processing algorithm for the playback device to apply in the
playback zone may be a part of a calibration of the playback
device. Such a calibration of the playback device may be initiated
by the playback device itself, by a server in communication with
the playback device, or by a controller device. In one case, the
calibration may be initiated because the playback device is new and
the calibration is part of an initial setup of the playback device.
In another case, the playback device may have been repositioned,
either within the same playback zone or from one playback zone to
another. In a further case, the calibration may be initiated by a
user of the playback device, such as via the controller device.
[0030] In one example, calibration of the playback device may
involve the computing device prompting the user of the playback
device to indicate one or more characteristics of the playback
zone, such as an approximate dimension of the playback zone,
flooring or wall material, and amount of furniture, among other
possibilities. The computing device may prompt the user via a user
interface on a controller device. Based on the one or more
characteristics of the playback zone as provided by the user, an
audio processing algorithm corresponding to the one or more
characteristics of the playback zone may be identified in the
database, and the playback device may accordingly apply the
identified audio processing algorithm when playing audio content in
the playback zone.
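A hedged sketch of the lookup described in paragraph [0030], assuming the user-provided characteristics are reduced to coarse categories that key directly into the database; the category names, table contents, and fallback entry are illustrative assumptions.

```python
# Sketch of identifying an audio processing algorithm from user-provided
# playback zone traits (approximate dimension, flooring, amount of furniture).
ALGORITHM_TABLE = {
    ("small", "hardwood", "sparse"): {"bass_db": -2.0, "treble_db": -1.0},
    ("large", "carpet",   "dense"):  {"bass_db": +3.0, "treble_db": +1.5},
}
DEFAULT_ALGORITHM = {"bass_db": 0.0, "treble_db": 0.0}

def identify_algorithm(dimension, flooring, furniture):
    return ALGORITHM_TABLE.get((dimension, flooring, furniture),
                               DEFAULT_ALGORITHM)

print(identify_algorithm("large", "carpet", "dense"))
```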
[0031] In another example, calibration of the playback device may
involve determining an acoustic characteristic of the playback zone
and identifying a corresponding audio processing algorithm based on
the acoustic characteristic of the playback zone. Determination of
the acoustic characteristic of the playback zone may be similar to
that described above. For instance, the playback device may play a
first audio signal in the playback zone for which it is being
calibrated and subsequently detect, using a microphone of the
playback device, a second audio signal. An acoustic characteristic
of the playback zone may then be determined based on the second
audio signal. Based on the determined
acoustic characteristic, a corresponding audio processing algorithm
may be identified in the database, and the playback device may
accordingly apply the identified audio processing algorithm when
playing audio content in the playback zone. As indicated above,
application of the corresponding audio processing algorithm by the
playback device when playing audio content in the playback zone may
cause audio content rendered by the playback device in the playback
zone to assume the predetermined audio characteristic, at least to
some extent.
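One plausible realization of the identification step in paragraph [0031] is a nearest-match search over stored zone characteristics; the Euclidean-distance rule below is an assumption for illustration, not a method recited in the application.

```python
# Sketch: pick the stored audio processing algorithm whose associated zone
# characteristic is closest to the one just measured.
import numpy as np

def identify_by_characteristic(measured, entries):
    """entries: list of (zone_characteristic: np.ndarray, eq_gains_db) pairs."""
    distances = [np.linalg.norm(measured - characteristic)
                 for characteristic, _ in entries]
    _, eq_gains_db = entries[int(np.argmin(distances))]
    return eq_gains_db

entries = [(np.array([1.0, 0.5, 1.0]), [0.0, 6.0, 0.0]),
           (np.array([0.7, 1.0, 1.0]), [3.0, 0.0, 0.0])]
print(identify_by_characteristic(np.array([0.95, 0.55, 1.0]), entries))
```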
[0032] While discussions of the calibration of the playback device
discussed above generally involve the database of audio processing
algorithms, one having ordinary skill in the art will appreciate
that the computing device may determine an audio processing
algorithm for a playback zone without accessing the database. For
instance, instead of identifying a corresponding audio processing
algorithm in the database, the computing device may determine the
audio processing algorithm by calculating the audio processing
algorithm based on the acoustic characteristic of the playback zone
(from the detected audio signal) and the predetermined audio
characteristic, similar to that described above in connection with
maintenance of and generation of audio processing algorithm entries
for the database. Other examples are also possible.
[0033] In one case, the playback device to be calibrated may be one
of a plurality of playback devices configured to synchronously play
audio content in the playback zone. In such a case, determination
of the acoustic characteristic of a playback zone may also involve
audio signals played by other playback devices in the playback
zone. In one example, during the determination of the audio
processing algorithm, each of the plurality of playback devices in
the playback zone may play audio signals at the same time such that
the audio signal detected by the microphone of the playback device
may include a portion corresponding to the audio signal played by
the playback device, as well as portions of audio signals played by
the other playback devices in the playback zone. An acoustic
response of the playback zone may be determined based on the
detected audio signal, and an acoustic characteristic of the
playback zone, including the other playback devices, may be
determined by removing an acoustic characteristic of the playback
device being calibrated from the acoustic response of the playback
zone. An audio processing algorithm may then be calculated or
identified in the database based on the acoustic characteristic of
the playback zone and applied by the playback device.
[0034] In another case, two or more playback devices in the
plurality of playback devices in the playback zone may each have a
respective built-in microphone, and may each be individually
calibrated according to the descriptions above. In one instance,
the acoustic characteristic of the playback zone may be determined
based on a collection of audio signals detected by microphones of
each of the two or more playback devices, and an audio processing
algorithm corresponding to the acoustic characteristic may be
identified for each of the two or more playback devices. Other
examples are also possible.
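As a sketch only, the collection of audio signals detected by microphones of each of the two or more playback devices could be reduced to a single characteristic by averaging the per-device magnitude responses; the averaging rule is an assumption, since the application does not specify how the collection is combined.

```python
# Sketch: combine responses measured at several devices' microphones by
# averaging per frequency bin.
import numpy as np

def combine_responses(per_device_responses):
    """Average per-frequency magnitude responses measured by each device."""
    return np.mean(np.vstack(per_device_responses), axis=0)

left = np.array([1.0, 0.6, 0.9])
right = np.array([0.8, 0.4, 1.1])
print(combine_responses([left, right]))   # [0.9, 0.5, 1.0]
```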
[0035] As indicated above, the present discussions involve
determining an audio processing algorithm for the playback device
to apply based on acoustic characteristics of a particular playback
zone the playback device is in. In one aspect, a computing device
is provided. The computing device includes a processor, and memory
having stored thereon instructions executable by the processor to
cause the computing device to perform functions. The functions
include causing a playback device in a playback zone to play a
first audio signal, and receiving from the playback device, data
indicating a second audio signal detected by a microphone of the
playback device. The second audio signal includes a portion
corresponding to the first audio signal. The functions further
include based on the second audio signal and an acoustic
characteristic of the playback device, determining an audio
processing algorithm, and transmitting data indicating the
determined audio processing algorithm to the playback device.
[0036] In another aspect, a computing device is provided. The
computing device includes a processor, and memory having stored
thereon instructions executable by the processor to cause the
computing device to perform functions. The functions include
causing a first playback device to play a first audio signal in a
playback zone, causing a second playback device to play a second
audio signal in the playback zone, and receiving from the first
playback device, data indicating a third audio signal detected by a
microphone of the first playback device. The third audio signal
includes (i) a portion corresponding to the first audio signal, and
(ii) a portion corresponding to the second audio signal played by
the second playback device. The functions also include based on the
third audio signal and an acoustic characteristic of the first
playback device, determining an audio processing algorithm, and
transmitting data indicating the determined audio processing
algorithm to the first playback device.
[0037] In another aspect, a playback device is provided. The
playback device includes a processor, a microphone, and memory
having stored thereon instructions executable by the processor to
cause the playback device to perform functions. The functions
include while in a playback zone, playing a first audio signal, and
detecting by the microphone, a second audio signal. The second
audio signal includes a portion corresponding to the first audio
signal. The functions also include based on the second audio signal
and an acoustic characteristic of the playback device, determining
an audio processing algorithm, and applying the determined audio
processing algorithm to audio data corresponding to a media item
when playing the media item in the playback zone.
[0038] In another aspect, a computing device is provided. The
computing device includes a processor, and memory having stored
thereon instructions executable by the processor to cause the
computing device to perform functions. The functions include
causing a playback device in a playback zone to play a first audio
signal, and receiving data indicating a second audio signal
detected by a microphone of the playback device. The second audio
signal includes a portion corresponding to the first audio signal
played by the playback device. The functions also include based on
the second audio signal and a characteristic of the playback
device, determining an acoustic characteristic of the playback
zone, based on the acoustic characteristic of the playback zone,
determining an audio processing algorithm, and causing to be stored
in a database, an association between the audio processing
algorithm and the acoustic characteristic of the playback zone.
[0039] In another aspect, a computing device is provided. The
computing device includes a processor, and memory having stored
thereon instructions executable by the processor to cause the
computing device to perform functions. The functions include
causing a playback device in a playback zone to play a first audio
signal, and receiving (i) data indicating one or more
characteristics of a playback zone, and (ii) data indicating a
second audio signal detected by a microphone of the playback
device. The second audio signal includes a portion corresponding to
the first audio signal played by the playback device. The functions also
include based on the second audio signal and a characteristic of
the playback device, determining an audio processing algorithm, and
causing to be stored in a database, an association between the
determined audio processing algorithm and at least one of the one
or more characteristics of the playback zone.
[0040] In another aspect, a computing device is provided. The
computing device includes a processor, and memory having stored
thereon instructions executable by the processor to cause the
computing device to perform functions. The functions include
maintaining a database of (i) a plurality of audio processing
algorithms and (ii) a plurality of playback zone characteristics.
Each audio processing algorithm of the plurality of audio
processing algorithms corresponds to at least one playback zone
characteristic of the plurality of playback zone characteristics.
The functions also include receiving data indicating one or more
characteristics of a playback zone, based on the data, identifying
in the database, an audio processing algorithm, and transmitting
data indicating the identified audio processing algorithm.
[0041] While some examples described herein may refer to functions
performed by given actors such as "users" and/or other entities, it
should be understood that this is for purposes of explanation only.
The claims should not be interpreted to require action by any such
example actor unless explicitly required by the language of the
claims themselves. It will be understood by one of ordinary skill
in the art that this disclosure includes numerous other
embodiments.
II. Example Operating Environment
[0042] FIG. 1 shows an example configuration of a media playback
system 100 in which one or more embodiments disclosed herein may be
practiced or implemented. The media playback system 100 as shown is
associated with an example home environment having several rooms
and spaces, such as for example, a master bedroom, an office, a
dining room, and a living room. As shown in the example of FIG. 1,
the media playback system 100 includes playback devices 102-124,
control devices 126 and 128, and a wired or wireless network router
130.
[0043] Further discussions relating to the different components of
the example media playback system 100 and how the different
components may interact to provide a user with a media experience
may be found in the following sections. While discussions herein
may generally refer to the example media playback system 100,
technologies described herein are not limited to applications
within, among other things, the home environment as shown in FIG.
1. For instance, the technologies described herein may be useful in
environments where multi-zone audio may be desired, such as, for
example, a commercial setting like a restaurant, mall or airport, a
vehicle like a sports utility vehicle (SUV), bus or car, a ship or
boat, an airplane, and so on.
a. Example Playback Devices
[0044] FIG. 2 shows a functional block diagram of an example
playback device 200 that may be configured to be one or more of the
playback devices 102-124 of the media playback system 100 of FIG.
1. The playback device 200 may include a processor 202, software
components 204, memory 206, audio processing components 208, audio
amplifier(s) 210, speaker(s) 212, microphone(s) 220, and a network
interface 214 including wireless interface(s) 216 and wired
interface(s) 218. In one case, the playback device 200 may not
include the speaker(s) 212, but rather a speaker interface for
connecting the playback device 200 to external speakers. In another
case, the playback device 200 may include neither the speaker(s)
212 nor the audio amplifier(s) 210, but rather an audio interface
for connecting the playback device 200 to an external audio
amplifier or audio-visual receiver.
[0045] In one example, the processor 202 may be a clock-driven
computing component configured to process input data according to
instructions stored in the memory 206. The memory 206 may be a
tangible computer-readable medium configured to store instructions
executable by the processor 202. For instance, the memory 206 may
be data storage that can be loaded with one or more of the software
components 204 executable by the processor 202 to achieve certain
functions. In one example, the functions may involve the playback
device 200 retrieving audio data from an audio source or another
playback device. In another example, the functions may involve the
playback device 200 sending audio data to another device or
playback device on a network. In yet another example, the functions
may involve pairing of the playback device 200 with one or more
playback devices to create a multi-channel audio environment.
[0046] Certain functions may involve the playback device 200
synchronizing playback of audio content with one or more other
playback devices. During synchronous playback, a listener will
preferably not be able to perceive time-delay differences between
playback of the audio content by the playback device 200 and the
one or more other playback devices. U.S. Pat. No. 8,234,395
entitled, "System and method for synchronizing operations among a
plurality of independently clocked digital data processing
devices," which is hereby incorporated by reference, provides in
more detail some examples for audio playback synchronization among
playback devices.
[0047] The memory 206 may further be configured to store data
associated with the playback device 200, such as one or more zones
and/or zone groups the playback device 200 is a part of, audio
sources accessible by the playback device 200, or a playback queue
that the playback device 200 (or some other playback device) may be
associated with. The data may be stored as one or more state
variables that are periodically updated and used to describe the
state of the playback device 200. The memory 206 may also include
the data associated with the state of the other devices of the
media system, and shared from time to time among the devices so
that one or more of the devices have the most recent data
associated with the system. Other embodiments are also
possible.
[0048] The audio processing components 208 may include one or more
of digital-to-analog converters (DAC), analog-to-digital converters
(ADC), audio preprocessing components, audio enhancement
components, and a digital signal processor (DSP), among others. In
one embodiment, one or more of the audio processing components 208
may be a subcomponent of the processor 202. In one example, audio
content may be processed and/or intentionally altered by the audio
processing components 208 to produce audio signals. The produced
audio signals may then be provided to the audio amplifier(s) 210
for amplification and playback through speaker(s) 212.
Particularly, the audio amplifier(s) 210 may include devices
configured to amplify audio signals to a level for driving one or
more of the speakers 212. The speaker(s) 212 may include an
individual transducer (e.g., a "driver") or a complete speaker
system involving an enclosure with one or more drivers. A
particular driver of the speaker(s) 212 may include, for example, a
subwoofer (e.g., for low frequencies), a mid-range driver (e.g.,
for middle frequencies), and/or a tweeter (e.g., for high
frequencies). In some cases, each transducer in the one or more
speakers 212 may be driven by an individual corresponding audio
amplifier of the audio amplifier(s) 210. In addition to producing
analog signals for playback by the playback device 200, the audio
processing components 208 may be configured to process audio
content to be sent to one or more other playback devices for
playback.
[0049] Audio content to be processed and/or played back by the
playback device 200 may be received from an external source, such
as via an audio line-in input connection (e.g., an auto-detecting
3.5 mm audio line-in connection) or the network interface 214.
[0050] The microphone(s) 220 may include an audio sensor configured
to convert detected sounds into electrical signals. The electrical
signal may be processed by the audio processing components 208
and/or the processor 202. The microphone(s) 220 may be positioned
in one or more orientations at one or more locations on the
playback device 200. The microphone(s) 220 may be configured to
detect sound within one or more frequency ranges. In one case, one
or more of the microphone(s) 220 may be configured to detect sound
within a frequency range of audio that the playback device 200 is
capable or rendering. In another case, one or more of the
microphone(s) 220 may be configured to detect sound within a
frequency range audible to humans. Other examples are also
possible.
[0051] The network interface 214 may be configured to facilitate a
data flow between the playback device 200 and one or more other
devices on a data network. As such, the playback device 200 may be
configured to receive audio content over the data network from one
or more other playback devices in communication with the playback
device 200, network devices within a local area network, or audio
content sources over a wide area network such as the Internet. In
one example, the audio content and other signals transmitted and
received by the playback device 200 may be transmitted in the form
of digital packet data containing an Internet Protocol (IP)-based
source address and IP-based destination addresses. In such a case,
the network interface 214 may be configured to parse the digital
packet data such that the data destined for the playback device 200
is properly received and processed by the playback device 200.
[0052] As shown, the network interface 214 may include wireless
interface(s) 216 and wired interface(s) 218. The wireless
interface(s) 216 may provide network interface functions for the
playback device 200 to wirelessly communicate with other devices
(e.g., other playback device(s), speaker(s), receiver(s), network
device(s), control device(s) within a data network the playback
device 200 is associated with) in accordance with a communication
protocol (e.g., any wireless standard including IEEE 802.11a,
802.11b, 802.11g, 802.11n, 802.11ac, 802.15, 4G mobile
communication standard, and so on). The wired interface(s) 218 may
provide network interface functions for the playback device 200 to
communicate over a wired connection with other devices in
accordance with a communication protocol (e.g., IEEE 802.3). While
the network interface 214 shown in FIG. 2 includes both wireless
interface(s) 216 and wired interface(s) 218, the network interface
214 may in some embodiments include only wireless interface(s) or
only wired interface(s).
[0053] In one example, the playback device 200 and one other
playback device may be paired to play two separate audio components
of audio content. For instance, playback device 200 may be
configured to play a left channel audio component, while the other
playback device may be configured to play a right channel audio
component, thereby producing or enhancing a stereo effect of the
audio content. The paired playback devices (also referred to as
"bonded playback devices") may further play audio content in
synchrony with other playback devices.
[0054] In another example, the playback device 200 may be sonically
consolidated with one or more other playback devices to form a
single, consolidated playback device. A consolidated playback
device may be configured to process and reproduce sound differently
than an unconsolidated playback device or playback devices that are
paired, because a consolidated playback device may have additional
speaker drivers through which audio content may be rendered. For
instance, if the playback device 200 is a playback device designed
to render low frequency range audio content (e.g., a subwoofer), the
playback device 200 may be consolidated with a playback device
designed to render full frequency range audio content. In such a
case, the full frequency range playback device, when consolidated
with the low frequency playback device 200, may be configured to
render only the mid and high frequency components of audio content,
while the low frequency range playback device 200 renders the low
frequency component of the audio content. The consolidated playback
device may further be paired with a single playback device or yet
another consolidated playback device.
[0055] By way of illustration, SONOS, Inc. presently offers (or has
offered) for sale certain playback devices including a "PLAY:1,"
"PLAY:3," "PLAY:5," "PLAYBAR," "CONNECT:AMP," "CONNECT," and "SUB."
Any other past, present, and/or future playback devices may
additionally or alternatively be used to implement the playback
devices of example embodiments disclosed herein. Additionally, it
is understood that a playback device is not limited to the example
illustrated in FIG. 2 or to the SONOS product offerings. For
example, a playback device may include a wired or wireless
headphone. In another example, a playback device may include or
interact with a docking station for personal mobile media playback
devices. In yet another example, a playback device may be integral
to another device or component such as a television, a lighting
fixture, or some other device for indoor or outdoor use.
b. Example Playback Zone Configurations
[0056] Referring back to the media playback system 100 of FIG. 1,
the environment may have one or more playback zones, each with one
or more playback devices. The media playback system 100 may be
established with one or more playback zones, after which one or
more zones may be added or removed to arrive at the example
configuration shown in FIG. 1. Each zone may be given a name
according to a different room or space such as an office, bathroom,
master bedroom, bedroom, kitchen, dining room, living room, and/or
balcony. In one case, a single playback zone may include multiple
rooms or spaces. In another case, a single room or space may
include multiple playback zones.
[0057] As shown in FIG. 1, the balcony, dining room, kitchen,
bathroom, office, and bedroom zones each have one playback device,
while the living room and master bedroom zones each have multiple
playback devices. In the living room zone, playback devices 104,
106, 108, and 110 may be configured to play audio content in
synchrony as individual playback devices, as one or more bonded
playback devices, as one or more consolidated playback devices, or
any combination thereof. Similarly, in the case of the master
bedroom, playback devices 122 and 124 may be configured to play
audio content in synchrony as individual playback devices, as a
bonded playback device, or as a consolidated playback device.
[0058] In one example, one or more playback zones in the
environment of FIG. 1 may each be playing different audio content.
For instance, the user may be grilling in the balcony zone and
listening to hip hop music being played by the playback device 102
while another user may be preparing food in the kitchen zone and
listening to classical music being played by the playback device
114. In another example, a playback zone may play the same audio
content in synchrony with another playback zone. For instance, the
user may be in the office zone where the playback device 118 is
playing the same rock music that is being played by playback
device 102 in the balcony zone. In such a case, playback devices
102 and 118 may be playing the rock music in synchrony such that
the user may seamlessly (or at least substantially seamlessly)
enjoy the audio content that is being played out-loud while moving
between different playback zones. Synchronization among playback
zones may be achieved in a manner similar to that of
synchronization among playback devices, as described in previously
referenced U.S. Pat. No. 8,234,395.
[0059] As suggested above, the zone configurations of the media
playback system 100 may be dynamically modified, and in some
embodiments, the media playback system 100 supports numerous
configurations. For instance, if a user physically moves one or
more playback devices to or from a zone, the media playback system
100 may be reconfigured to accommodate the change(s). For instance,
if the user physically moves the playback device 102 from the
balcony zone to the office zone, the office zone may now include
both the playback device 118 and the playback device 102. The
playback device 102 may be paired or grouped with the office zone
and/or renamed if so desired via a control device such as the
control devices 126 and 128. On the other hand, if the one or more
playback devices are moved to a particular area in the home
environment that is not already a playback zone, a new playback
zone may be created for the particular area.
[0060] Further, different playback zones of the media playback
system 100 may be dynamically combined into zone groups or split up
into individual playback zones. For instance, the dining room zone
and the kitchen zone 114 may be combined into a zone group for a
dinner party such that playback devices 112 and 114 may render
audio content in synchrony. On the other hand, the living room zone
may be split into a television zone including playback device 104,
and a listening zone including playback devices 106, 108, and 110,
if the user wishes to listen to music in the living room space
while another user wishes to watch television.
c. Example Control Devices
[0061] FIG. 3 shows a functional block diagram of an example
control device 300 that may be configured to be one or both of the
control devices 126 and 128 of the media playback system 100. As
shown, the control device 300 may include a processor 302, memory
304, a network interface 306, a user interface 308, and
microphone(s) 310. In one example, the control device 300 may be a
dedicated controller for the media playback system 100. In another
example, the control device 300 may be a network device on which
media playback system controller application software may be
installed, such as for example, an iPhone™, iPad™, or any
other smart phone, tablet, or network device (e.g., a networked
computer such as a PC or Mac™).
[0062] The processor 302 may be configured to perform functions
relevant to facilitating user access, control, and configuration of
the media playback system 100. The memory 304 may be configured to
store instructions executable by the processor 302 to perform those
functions. The memory 304 may also be configured to store the media
playback system controller application software and other data
associated with the media playback system 100 and the user.
[0063] The microphone(s) 310 may include an audio sensor configured
to convert detected sounds into electrical signals. The electrical
signal may be processed by the processor 302. In one case, if the
control device 300 is a device that may also be used as a means for
voice communication or voice recording, one or more of the
microphone(s) 310 may be a microphone for facilitating those
functions. For instance, the one or more of the microphone(s) 310
may be configured to detect sound within a frequency range that a
human is capable of producing and/or a frequency range audible to
humans. Other examples are also possible.
[0064] In one example, the network interface 306 may be based on an
industry standard (e.g., infrared, radio, wired standards including
IEEE 802.3, wireless standards including IEEE 802.11a, 802.11b,
802.11g, 802.11n, 802.11ac, 802.15, 4G mobile communication
standard, and so on). The network interface 306 may provide a means
for the control device 300 to communicate with other devices in the
media playback system 100. In one example, data and information
(e.g., such as a state variable) may be communicated between
control device 300 and other devices via the network interface 306.
For instance, playback zone and zone group configurations in the
media playback system 100 may be received by the control device 300
from a playback device or another network device, or transmitted by
the control device 300 to another playback device or network device
via the network interface 306. In some cases, the other network
device may be another control device.
[0065] Playback device control commands such as volume control and
audio playback control may also be communicated from the control
device 300 to a playback device via the network interface 306. As
suggested above, changes to configurations of the media playback
system 100 may also be performed by a user using the control device
300. The configuration changes may include adding/removing one or
more playback devices to/from a zone, adding/removing one or more
zones to/from a zone group, forming a bonded or consolidated
player, separating one or more playback devices from a bonded or
consolidated player, among others. Accordingly, the control device
300 may sometimes be referred to as a controller, whether the
control device 300 is a dedicated controller or a network device on
which media playback system controller application software is
installed.
[0066] The user interface 308 of the control device 300 may be
configured to facilitate user access and control of the media
playback system 100, by providing a controller interface such as
the controller interface 400 shown in FIG. 4. The controller
interface 400 includes a playback control region 410, a playback
zone region 420, a playback status region 430, a playback queue
region 440, and an audio content sources region 450. The user
interface 400 as shown is just one example of a user interface that
may be provided on a network device such as the control device 300
of FIG. 3 (and/or the control devices 126 and 128 of FIG. 1) and
accessed by users to control a media playback system such as the
media playback system 100. Other user interfaces of varying
formats, styles, and interactive sequences may alternatively be
implemented on one or more network devices to provide comparable
control access to a media playback system.
[0067] The playback control region 410 may include selectable
(e.g., by way of touch or by using a cursor) icons to cause
playback devices in a selected playback zone or zone group to play
or pause, fast forward, rewind, skip to next, skip to previous,
enter/exit shuffle mode, enter/exit repeat mode, enter/exit cross
fade mode. The playback control region 410 may also include
selectable icons to modify equalization settings, and playback
volume, among other possibilities.
[0068] The playback zone region 420 may include representations of
playback zones within the media playback system 100. In some
embodiments, the graphical representations of playback zones may be
selectable to bring up additional selectable icons to manage or
configure the playback zones in the media playback system, such as
a creation of bonded zones, creation of zone groups, separation of
zone groups, and renaming of zone groups, among other
possibilities.
[0069] For example, as shown, a "group" icon may be provided within
each of the graphical representations of playback zones. The
"group" icon provided within a graphical representation of a
particular zone may be selectable to bring up options to select one
or more other zones in the media playback system to be grouped with
the particular zone. Once grouped, playback devices in the zones
that have been grouped with the particular zone will be configured
to play audio content in synchrony with the playback device(s) in
the particular zone. Analogously, a "group" icon may be provided
within a graphical representation of a zone group. In this case,
the "group" icon may be selectable to bring up options to deselect
one or more zones in the zone group to be removed from the zone
group. Other interactions and implementations for grouping and
ungrouping zones via a user interface such as the user interface
400 are also possible. The representations of playback zones in the
playback zone region 420 may be dynamically updated as playback
zone or zone group configurations are modified.
[0070] The playback status region 430 may include graphical
representations of audio content that is presently being played,
previously played, or scheduled to play next in the selected
playback zone or zone group. The selected playback zone or zone
group may be visually distinguished on the user interface, such as
within the playback zone region 420 and/or the playback status
region 430. The graphical representations may include track title,
artist name, album name, album year, track length, and other
relevant information that may be useful for the user to know when
controlling the media playback system via the user interface
400.
[0071] The playback queue region 440 may include graphical
representations of audio content in a playback queue associated
with the selected playback zone or zone group. In some embodiments,
each playback zone or zone group may be associated with a playback
queue containing information corresponding to zero or more audio
items for playback by the playback zone or zone group. For
instance, each audio item in the playback queue may comprise a
uniform resource identifier (URI), a uniform resource locator (URL)
or some other identifier that may be used by a playback device in
the playback zone or zone group to find and/or retrieve the audio
item from a local audio content source or a networked audio content
source, possibly for playback by the playback device.
[0072] In one example, a playlist may be added to a playback queue,
in which case information corresponding to each audio item in the
playlist may be added to the playback queue. In another example,
audio items in a playback queue may be saved as a playlist. In a
further example, a playback queue may be empty, or populated but
"not in use" when the playback zone or zone group is playing
continuously streaming audio content, such as Internet radio that
may continue to play until otherwise stopped, rather than discrete
audio items that have playback durations. In an alternative
embodiment, a playback queue can include Internet radio and/or
other streaming audio content items and be "in use" when the
playback zone or zone group is playing those items. Other examples
are also possible.
[0073] When playback zones or zone groups are "grouped" or
"ungrouped," playback queues associated with the affected playback
zones or zone groups may be cleared or re-associated. For example,
if a first playback zone including a first playback queue is
grouped with a second playback zone including a second playback
queue, the established zone group may have an associated playback
queue that is initially empty, that contains audio items from the
first playback queue (such as if the second playback zone was added
to the first playback zone), that contains audio items from the
second playback queue (such as if the first playback zone was added
to the second playback zone), or a combination of audio items from
both the first and second playback queues. Subsequently, if the
established zone group is ungrouped, the resulting first playback
zone may be re-associated with the previous first playback queue,
or be associated with a new playback queue that is empty or
contains audio items from the playback queue associated with the
established zone group before the established zone group was
ungrouped. Similarly, the resulting second playback zone may be
re-associated with the previous second playback queue, or be
associated with a new playback queue that is empty, or contains
audio items from the playback queue associated with the established
zone group before the established zone group was ungrouped. Other
examples are also possible.
[0074] Referring back to the user interface 400 of FIG. 4, the
graphical representations of audio content in the playback queue
region 440 may include track titles, artist names, track lengths,
and other relevant information associated with the audio content in
the playback queue. In one example, graphical representations of
audio content may be selectable to bring up additional selectable
icons to manage and/or manipulate the playback queue and/or audio
content represented in the playback queue. For instance, a
represented audio content may be removed from the playback queue,
moved to a different position within the playback queue, or
selected to be played immediately, or after any currently playing
audio content, among other possibilities. A playback queue
associated with a playback zone or zone group may be stored in a
memory on one or more playback devices in the playback zone or zone
group, on a playback device that is not in the playback zone or
zone group, and/or some other designated device.
[0075] The audio content sources region 450 may include graphical
representations of selectable audio content sources from which
audio content may be retrieved and played by the selected playback
zone or zone group. Discussions pertaining to audio content sources
may be found in the following section.
d. Example Audio Content Sources
[0076] As indicated previously, one or more playback devices in a
zone or zone group may be configured to retrieve for playback audio
content (e.g. according to a corresponding URI or URL for the audio
content) from a variety of available audio content sources. In one
example, audio content may be retrieved by a playback device
directly from a corresponding audio content source (e.g., a line-in
connection). In another example, audio content may be provided to a
playback device over a network via one or more other playback
devices or network devices.
[0077] Example audio content sources may include a memory of one or
more playback devices in a media playback system such as the media
playback system 100 of FIG. 1, local music libraries on one or more
network devices (such as a control device, a network-enabled
personal computer, or network-attached storage (NAS), for
example), streaming audio services providing audio content via the
Internet (e.g., the cloud), or audio sources connected to the media
playback system via a line-in input connection on a playback device
or network device, among other possibilities.
[0078] In some embodiments, audio content sources may be regularly
added or removed from a media playback system such as the media
playback system 100 of FIG. 1. In one example, an indexing of audio
items may be performed whenever one or more audio content sources
are added, removed or updated. Indexing of audio items may involve
scanning for identifiable audio items in all folders/directories
shared over a network accessible by playback devices in the media
playback system, and generating or updating an audio content
database containing metadata (e.g., title, artist, album, track
length, among others) and other associated information, such as a
URI or URL for each identifiable audio item found. Other examples
for managing and maintaining audio content sources may also be
possible.
[0079] The above discussions relating to playback devices,
controller devices, playback zone configurations, and media content
sources provide only some examples of operating environments within
which functions and methods described below may be implemented.
Other operating environments and configurations of media playback
systems, playback devices, and network devices not explicitly
described herein may also be applicable and suitable for
implementation of the functions and methods.
III. Maintaining a Database of Signal Processing Algorithms
[0080] As indicated above, some examples discussed herein relate to
maintaining a database of audio processing algorithms. In some
cases, maintenance of a database may further involve generating
and/or updating entries of audio processing algorithms for the
database. Each of the audio processing algorithms in the database
may correspond to one or more characteristics of the playback zone.
In one example, the one or more characteristics of the playback
zone may include an acoustic characteristic of the playback zone.
While the discussions below may generally relate to determining an
audio processing algorithm to be stored as an entry in a database,
one having ordinary skill in the art will appreciate that similar
functions may also be performed to update existing entries in the
database. The database may be accessed to identify an audio
processing algorithm for a playback device to apply when playing
audio content in a particular playback zone.
a. Example Database of Audio Processing Algorithms and
Corresponding Acoustic Characteristics of Playback Zones
[0081] FIG. 5 shows an example flow diagram of a method 500 for
maintaining a database of audio processing algorithms and playback
zone acoustic characteristics. As indicated above, maintaining a
database of audio processing algorithms may involve determining
audio processing algorithms to be stored in the database. Method
500 shown in FIG. 5 presents an embodiment of a method that can be
implemented within an operating environment involving, for example,
the media playback system 100 of FIG. 1, one or more of the
playback device 200 of FIG. 2, and one or more of the control
device 300 of FIG. 3. In one example, the method 500 may be
performed by a computing device that is in communication with a
media playback system, such as the media playback system 100. In
another example, some or all of the functions of method 500 may
alternatively be performed by one or more other computing devices,
such as one or more servers, one or more playback devices, and/or
one or more controller devices.
[0082] Method 500 may include one or more operations, functions, or
actions as illustrated by one or more of blocks 502-510. Although
the blocks are illustrated in sequential order, these blocks may
also be performed in parallel, and/or in a different order than
those described herein. Also, the various blocks may be combined
into fewer blocks, divided into additional blocks, and/or removed
based upon the desired implementation. In addition, for the method
500 and other processes and methods disclosed herein, the flowchart
shows functionality and operation of one possible implementation of
present embodiments. In this regard, each block may represent a
module, a segment, or a portion of program code, which includes one
or more instructions executable by a processor for implementing
specific logical functions or steps in the process. The program
code may be stored on any type of computer readable medium, for
example, such as a storage device including a disk or hard
drive.
[0083] The computer readable medium may include non-transitory
computer readable medium, for example, such as computer-readable
media that stores data for short periods of time like register
memory, processor cache and Random Access Memory (RAM). The
computer readable medium may also include non-transitory media,
such as secondary or persistent long term storage, like read only
memory (ROM), optical or magnetic disks, compact-disc read only
memory (CD-ROM), for example. The computer readable media may also
be any other volatile or non-volatile storage systems. The computer
readable medium may be considered a computer readable storage
medium, for example, or a tangible storage device. In addition, for
the method 500 and other processes and methods disclosed herein,
each block may represent circuitry that is wired to perform the
specific logical functions in the process.
[0084] As shown in FIG. 5, the method 500 involves the computing
device causing a playback device in a playback zone to play a first
audio signal at block 502, receiving data indicating a second audio
signal detected by a microphone of the playback device at block
504, based on the second audio signal and a characteristic of the
playback device, determining an acoustic characteristic of the
playback zone at block 506, based on the acoustic characteristic of
the playback zone, determining an audio processing algorithm at
block 508, and causing to be stored in a database, an association
between the audio processing algorithm and the acoustic
characteristic of the playback zone at block 510.
[0085] As discussed previously, the database may be accessed to
identify an audio processing algorithm for a playback device to
apply when playing audio content in a playback zone. As such, in
one example, the method 500 may be performed for a variety of
different playback zones to build a database of audio processing
algorithms corresponding to a variety of different playback
environments.
[0086] At block 502, the method 500 involves causing a playback
device in a playback zone to play a first audio signal. The
playback device may be a playback device similar to the playback
device 200 shown in FIG. 2. In one case, the computing device may
cause the playback device to play the first audio signal by sending
a command to play the first audio signal. In another case, the
computing device may also provide to the playback device the first
audio signal to be played.
[0087] In one example, the first audio signal may be used for
determining an acoustic response of the playback zone. As such, the
first audio signal may be a test signal or measurement signal
representative of audio content that may be played by the playback
device during regular use by a user. Accordingly, the first audio
signal may include audio content with frequencies substantially
covering a renderable frequency range of the playback device or a
frequency range audible to a human.
[0088] In one example, the playback zone may be a playback zone
representative of one of a plurality of playback environments
within which the playback device may play audio content during
regular use by a user. Referring to FIG. 1, the playback zone may
be representative of any one of the different rooms and zone groups
in the media playback system 100. For instance, the playback zone
may be representative of the dining room.
[0089] In one case, the playback zone may be a model playback zone
built to simulate a listening environment within which the playback
device may play audio content. In one instance, the playback zone
may be one of a plurality of playback zones built to simulate the
plurality of playback environments. The plurality of playback zones
may be built for purposes of populating such a database of audio
processing algorithms. In such a case, certain characteristics of
the playback zone may be predetermined and/or known. For instance,
a dimension of the playback zone, a flooring or wall material of
the playback zone (or other features that may affect an audio
reflectivity characteristic of the playback zone), an amount of
furniture in the playback zone, or sizes and types of the furniture
in the playback zone, among other possibilities, may be
characteristics of the playback zone that may be predetermined
and/or known.
[0090] In another case, the playback zone may be a room within a
household of a user of the playback device. For instance, as part
of building the database, users of the playback device, such as
customers and/or testers, may be invited to use their playback
devices to perform the functions of method 500 to build the
database. In some cases, the certain characteristics of the user
playback zone may not be known. In some other cases, some or all of
the certain characteristics of the user playback zone may be
provided by the user. The database populated from performing the
functions of method 500 may include entries based on simulated
playback zones and/or user playback zones.
[0091] While block 502 involves the computing device causing the
playback device to play the first audio signal, one having ordinary
skill in the art will appreciate that playback of the first audio
signal by the playback device may not necessarily be caused or
initiated by the computing device. For instance, a controller
device may send a command to the playback device to cause the
playback device to play the first audio signal. In another
instance, the playback device may play the first audio signal
without receiving a command from the computing device or
controller. Other examples are also possible.
[0092] At block 504, the method 500 involves receiving data
indicating a second audio signal detected by a microphone of the
playback device. As indicated above, the playback device may be a
playback device similar to the playback device 200 shown in FIG. 2.
As such, the microphone may be the microphone 220. In one example,
the computing device may receive the data from the playback device.
In another example, the computing device may receive the data via
another playback device, a controller device, or another
server.
[0093] While the playback device is playing the first audio signal,
or shortly thereafter, the microphone of the playback device may
detect the second audio signal. The second audio signal may include
detectable audio signals present in the playback zone. For
instance, the second audio signal may include a portion
corresponding to the first audio signal played by the playback
device.
[0094] In one example, the computing device may receive data
indicating the detected second audio signal from the playback
device as a media stream while the microphone detects the second
audio signal. In another example, the computing device may receive
from the playback device, data indicating the second audio signal
once detection of the second audio signal by the microphone of the
playback device is complete. In either case, the playback device
may process the detected second audio signal (via an audio
processing component, such as the audio processing component 208 of
the playback device 200) to generate the data indicating the second
audio signal, and transmit the data to the computing device. In one
instance, generating the data indicating the second audio signal
may involve converting the second audio signal from an analog
signal to a digital signal. Other examples are also possible.
[0095] At block 506, the method 500 involves, based on the second
audio signal and a characteristic of the playback device,
determining an acoustic characteristic of the playback zone. As
indicated above, the second audio signal may include a portion
corresponding to the first audio signal played by the playback
device in the playback zone.
[0096] The characteristic of the playback device may include one or
more of an acoustic characteristic of the playback device,
specifications of the playback device (e.g., number of transducers,
frequency range, amplifier wattage, etc.), and a model of the
playback device. In some cases, the acoustic characteristic of the
playback device and/or specifications of the playback device may be
associated with the model of the playback device. For instance, a
particular model of playback devices may have substantially the
same specifications and acoustic characteristics. In one example, a
database of models of playback devices, acoustic characteristics of
the models of playback devices, and/or specifications of the models
of playback devices may be maintained on the computing device or
another device in communication with the computing device.
[0097] In one example, an acoustic response from the playback
device playing the first audio signal in the playback zone may be
represented by a relationship between the first audio signal and
the second audio signal. Mathematically, if the first audio signal
is f(t), the second audio signal is s(t), and the acoustic response
of the playback device playing the first audio signal in the
playback zone is h_r(t), then
s(t) = f(t) ⊗ h_r(t)   (1)
where ⊗ represents the mathematical operation of convolution. As
such, given the second audio signal s(t) that is detected by the
microphone of the playback device, and the first audio signal f(t)
that was played by the playback device, h_r(t) may be
calculated.
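By way of illustration, the calculation of h_r(t) from equation (1)
may be sketched as a regularized frequency-domain deconvolution. The
sketch below uses Python with NumPy; the function name and the
regularization constant are illustrative assumptions rather than a
prescribed implementation.

import numpy as np

def estimate_acoustic_response(first_signal: np.ndarray,
                               second_signal: np.ndarray,
                               eps: float = 1e-8) -> np.ndarray:
    # Estimate h_r(t) from equation (1), s(t) = f(t) convolved with h_r(t),
    # by dividing the spectra S/F, with a small regularization term so that
    # near-zero bins of F do not blow up the estimate.
    n = len(first_signal) + len(second_signal) - 1
    F = np.fft.rfft(first_signal, n)
    S = np.fft.rfft(second_signal, n)
    H_r = S * np.conj(F) / (np.abs(F) ** 2 + eps)
    return np.fft.irfft(H_r, n)

For example, estimate_acoustic_response(f, s) applied to the played
test signal f and the detected signal s yields an estimate of h_r(t).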
[0098] In one case, because the first audio signal f(t) was played
by the playback device, the acoustic response h_r(t) may
include (i) an acoustic characteristic of the playback device and
(ii) the acoustic characteristic of the playback zone that is
independent of the playback device. Mathematically, this
relationship may be represented as
h_r(t) = h_p(t) + h_room(t)   (2)
where h_p(t) is the acoustic characteristic of the playback
device, and h_room(t) is the acoustic characteristic of the
playback zone, independent of the playback device. As such, the
acoustic characteristic of the playback zone that is independent of
the playback device may be determined by removing the acoustic
characteristic of the playback device from the acoustic response of
the playback zone to the first audio signal played by the playback
device. In other words,
h_room(t) = h_r(t) - h_p(t).   (3)
[0099] In one example, the acoustic characteristic of the playback
device h.sub.p(t) may be determined by placing the playback device
or a representative playback device of the same model in an
anechoic chamber, causing the playback device to play a measurement
signal in the anechoic chamber, and detecting a response signal by
the microphone of the playback device. The measurement signal
played by the playback device in the anechoic chamber may be
similar to the first audio signal f(t) discussed above. For
instance, the measurement signal may have audio content with
frequencies substantially covering the renderable frequency range
of the playback device or the frequency range audible to a
human.
[0100] The acoustic characteristic of the playback device
h_p(t) may represent a relationship between the played
measurement signal and the detected response signal. For instance,
if the measurement signal has a first signal magnitude at a
particular frequency, and the detected response signal has a second
signal magnitude at the particular frequency different from the
first signal magnitude, then the acoustic characteristic of the
playback device h_p(t) may indicate signal amplification or
attenuation at the particular frequency.
[0101] Mathematically, if the measurement signal is x(t), the
detected response signal is y(t), and the acoustic characteristic
of the playback device in the anechoic chamber is h_p(t),
then
y(t) = x(t) ⊗ h_p(t).   (4)
Accordingly, h_p(t) may be calculated based on the measurement
signal x(t) and the detected response signal y(t). As indicated
above, h_p(t) may be the representative acoustic characteristic
for playback devices of the same model as that used in the anechoic
chamber.
[0102] In one example, as indicated above, the reference acoustic
characteristic h_p(t) may be stored in association with the
model of the playback device and/or specifications of the playback
device. In one case, h_p(t) may be stored on the computing
device. In another case, h_p(t) may be stored on the
playback device and other playback devices of the same model. In a
further case, an inverse of h_p(t), represented as
h_p^{-1}(t), may be stored instead of h_p(t).
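As one possible illustration of how h_p(t), or its inverse, might be
stored in association with a playback device model, the sketch below
reuses the deconvolution helper from the earlier sketch; the
dictionary layout and key names are assumptions made for
illustration only.

import numpy as np

# Assumes estimate_acoustic_response() from the earlier sketch.
reference_characteristics = {}  # keyed by playback device model

def store_reference_characteristic(model: str,
                                   measurement_signal: np.ndarray,
                                   detected_response: np.ndarray) -> None:
    # Estimate h_p(t) from equation (4), y(t) = x(t) convolved with h_p(t),
    # then store the impulse response and a regularized inverse spectrum
    # (standing in for h_p^{-1}) under the device model.
    h_p = estimate_acoustic_response(measurement_signal, detected_response)
    H_p = np.fft.rfft(h_p)
    reference_characteristics[model] = {
        "h_p": h_p,
        "H_p_inverse": np.conj(H_p) / (np.abs(H_p) ** 2 + 1e-8),
    }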
[0103] Referring back to block 506, the acoustic characteristic of
the playback zone h_room(t) may accordingly be determined based
on the first audio signal f(t), the second audio signal s(t), and
the acoustic characteristic h_p(t) of the playback device. In
one example, the inverse of the acoustic characteristic of the
playback device, h_p^{-1}(t), may be applied to equation (2). In
other words,
h_p^{-1}(t) ⊗ h_r(t) = h_p^{-1}(t) ⊗ h_p(t) + h_p^{-1}(t) ⊗ h_room(t)
= I(t) + h_p^{-1}(t) ⊗ h_room(t)   (5)
where I(t) is an impulse signal. The acoustic characteristic of the
playback zone h_room(t) may then be simplified as:
h_room(t) = h_p(t) ⊗ [h_p^{-1}(t) ⊗ h_r(t) - I(t)].   (6)
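A corresponding sketch of block 506, following the additive model of
equations (2)-(3), is shown below; the function name and the
zero-padding choice are assumptions.

import numpy as np

def estimate_zone_characteristic(h_r: np.ndarray, h_p: np.ndarray) -> np.ndarray:
    # Remove the playback device's own characteristic from the measured
    # response per equation (3): h_room(t) = h_r(t) - h_p(t). The shorter
    # response is zero-padded so the subtraction is sample-aligned.
    n = max(len(h_r), len(h_p))
    h_r = np.pad(h_r, (0, n - len(h_r)))
    h_p = np.pad(h_p, (0, n - len(h_p)))
    return h_r - h_p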
[0104] At block 508, the method 500 involves, based on the acoustic
characteristic of the playback zone and a predetermined audio
signal, determining an audio processing algorithm. In one example,
the audio processing algorithm may be determined such that an
application of the determined audio processing algorithm by the
playback device when playing the first audio signal in the playback
zone may produce a third audio signal having an audio
characteristic that is substantially the same as, or at least
approximates, a predetermined audio characteristic.
[0105] In one example, the predetermined audio characteristic may
be an audio frequency equalization that is considered
good-sounding. In one case, the predetermined audio characteristic
may involve an equalization that is substantially even across the
renderable frequency range of the playback device. In another case,
the predetermined audio characteristic may involve an equalization
that is considered pleasing to a typical listener. In a further
case, the predetermined audio characteristic may involve a
frequency response that is considered suitable for a particular
genre of music.
[0106] Whichever the case, the computing device may determine the
audio processing algorithm based on the acoustic characteristic and
the predetermined audio characteristic. In one example, if the
acoustic characteristic of the playback zone is one in which a
particular audio frequency is more attenuated than other
frequencies, and the predetermined audio characteristic involves an
equalization in which the particular audio frequency is minimally
attenuated, the corresponding audio processing algorithm may
involve an increased amplification at the particular audio
frequency.
[0107] If the predetermined audio characteristic is represented by
a predetermined audio signal z(t), and the audio processing
algorithm is represented by p(t), a relationship between the
predetermined audio signal z(t), the audio processing algorithm
p(t), and the acoustic characteristic of the playback zone
h_room(t) may be mathematically described as:
z(t) = p(t) ⊗ h_room(t).   (7)
Accordingly, the audio processing algorithm p(t) may be
mathematically described as:
p(t) = z(t) ⊗ h_room^{-1}(t).   (8)
[0108] In some cases, determining the audio processing algorithm
may involve determining one or more parameters for the audio
processing algorithm (e.g., coefficients for p(t)). For instance,
the audio processing algorithm may include certain signal
amplification gains at corresponding frequencies of the audio
signal. As such, parameters indicating those signal amplification
gains and/or the corresponding frequencies of the audio signal may
be identified to determine the audio processing algorithm p(t).
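For illustration, one way block 508 might compute p(t) per equation
(8) is to divide the spectrum of the predetermined audio signal z(t)
by the spectrum of h_room(t); the per-frequency gains of the result
correspond to the parameters discussed above. The function name and
regularization constant below are assumptions.

import numpy as np

def determine_processing_algorithm(h_room: np.ndarray,
                                   z: np.ndarray,
                                   eps: float = 1e-8) -> np.ndarray:
    # Equation (8): p(t) = z(t) convolved with h_room^{-1}(t), computed in
    # the frequency domain as P = Z / H_room with regularization. Frequencies
    # attenuated by the playback zone receive correspondingly larger gain,
    # consistent with the amplification example in paragraph [0106].
    n = max(len(h_room), len(z))
    H_room = np.fft.rfft(h_room, n)
    Z = np.fft.rfft(z, n)
    P = Z * np.conj(H_room) / (np.abs(H_room) ** 2 + eps)
    return np.fft.irfft(P, n)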
[0109] At block 510, the method 500 involves causing to be stored
in a database, an association between the audio processing
algorithm and the acoustic characteristic of the playback zone. As
such, an entry that includes the acoustic characteristic of the
playback zone h_room(t) and the corresponding audio processing
algorithm p(t), as determined at blocks 506 and 508, may be added to
the database. In one example, the database may be stored on local
memory storage of the computing device. In another example, if the
database is stored on another device, the computing device may
transmit the audio processing algorithm and acoustic characteristic
of the playback zone to the other device to be stored in the
database. Other examples are also possible.
[0110] As indicated above, the playback zone for which the audio
processing algorithm was determined may be a model playback zone
used to simulate a listening environment within which the playback
device may play audio content, or a room of a user of the playback
device. In some cases, the database may include entries generated
based on audio signals played and detected within model playback
zones as well as entries generated based on audio signals played
and detected within a room of a user of a playback device.
[0111] FIG. 6A shows an example portion of a database 600 of audio
processing algorithms, within which the audio processing algorithm
p(t) determined in the discussions above may be stored. As shown,
the portion of the database 600 may include a plurality of entries
602-608. The entry 602 may include a playback zone acoustic
characteristic h_room^{-1}(t)-1. The acoustic characteristic
h_room^{-1}(t)-1 may be a mathematical representation of the
acoustic characteristic of a playback zone, as calculated based on
an audio signal detected by a playback device and a characteristic
of the playback device as described above. Corresponding to the
acoustic characteristic h_room^{-1}(t)-1 in entry 602 may be
coefficients w_1, x_1, y_1, and z_1 for an audio
processing algorithm determined based on the acoustic
characteristic h_room^{-1}(t)-1 and a predetermined audio
characteristic, as also described above.
[0112] As further shown, entry 604 of the database 600 may include
a playback zone acoustic characteristic h_room^{-1}(t)-2 and
processing algorithm coefficients w_2, x_2, y_2, and z_2; entry 606
of the database 600 may include a playback zone acoustic
characteristic h_room^{-1}(t)-3 and processing algorithm
coefficients w_3, x_3, y_3, and z_3; and entry 608 of the database
600 may include a playback zone acoustic characteristic
h_room^{-1}(t)-4 and processing algorithm coefficients w_4, x_4,
y_4, and z_4.
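A minimal sketch of how entries such as 602-608 might be represented
in memory follows; the field and variable names are assumptions, and
the coefficient values are placeholders.

from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class AlgorithmEntry:
    # One database 600 row: a playback zone acoustic characteristic and the
    # coefficients (w, x, y, z) of the corresponding audio processing algorithm.
    zone_characteristic: np.ndarray
    coefficients: List[float]

database_600: List[AlgorithmEntry] = []
database_600.append(AlgorithmEntry(
    zone_characteristic=np.zeros(512),      # placeholder for h_room^{-1}(t)-1
    coefficients=[0.9, 1.1, 1.0, 0.95],     # placeholder w_1, x_1, y_1, z_1
))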
[0113] One having ordinary skill in the art will appreciate that
database 600 is just one example of a database that may be
populated and maintained by performing the functions of method 500.
In one example, the playback zone acoustic characteristics may be
stored in a different format or mathematical state (e.g., inverse
vs. non-inverse functions). In another example, the audio processing
algorithms may be stored as functions and/or equalization functions.
Other examples are also possible.
[0114] In one example, some of the functions described above may be
performed multiple times for the same playback device in the same
playback zone to determine the acoustic characteristic of the
playback zone h.sub.room(t) and the corresponding processing
algorithm p(t). For instance, blocks 502-506 may be performed
multiple times to determine a plurality of acoustic characteristics
of the playback zone. A combined (i.e. averaged) acoustic
characteristic of the playback zone may be determined from the
plurality of acoustic characteristics, and the corresponding
processing algorithm p(t) may be determined based on the combined
acoustic characteristic of the playback zone. An association
between the corresponding processing algorithm p(t) and the
acoustic characteristic of the playback zone h.sub.room(t) or
h.sub.room.sup.-1(t) may then be stored in the database. In some
cases, the first audio signal played by the playback device in the
playback zone may be substantially the same audio signal during
each of the iterations of the functions. In some other cases, the
first audio signal played by the playback device in the playback
zone may be a different audio signal for some or each of the
iterations of the functions. Other examples are also possible.
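A combined acoustic characteristic of the kind described above might
be computed as a simple sample-wise average of the repeated
estimates, as in the sketch below; averaging is only one possible way
of combining, and the function name is an assumption.

from typing import List
import numpy as np

def combine_zone_characteristics(estimates: List[np.ndarray]) -> np.ndarray:
    # Average several h_room(t) estimates from repeated runs of blocks
    # 502-506 into one combined characteristic of the playback zone.
    n = max(len(e) for e in estimates)
    padded = [np.pad(e, (0, n - len(e))) for e in estimates]
    return np.mean(padded, axis=0)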
[0115] The method 500 as described above (or some variation of the
method 500) may further be performed to generate other entries in
the database. For instance, given that the playback device is a
first playback device, the playback zone is a first playback zone,
and the audio processing algorithm is a first audio processing
algorithm, the method 500 may additionally or alternatively be
performed using a second playback device in a second playback zone.
In one example, the second playback device may play a fourth audio
signal in the second playback zone and a microphone of the second
playback device may detect a fifth audio signal that includes a
portion of the fourth audio signal played by the second playback
device. The computing device may then receive data indicating the
fifth audio signal and determine an acoustic characteristic of the
second playback zone based on the fifth audio signal and a
characteristic of the second playback device.
[0116] Based on the acoustic characteristic of the second playback
zone, the computing device may determine a second audio processing
algorithm such that application of the determined second audio
processing algorithm by the second playback device when playing the
fourth audio signal in the second playback zone produces a sixth audio
signal having an audio characteristic substantially the same as the
predetermined audio characteristic, represented by the
predetermined audio signal z(t) shown in equations (7) and (8). The
computing device may then cause to be stored in the database, an
association between the second audio processing algorithm and the
acoustic characteristic of the second playback zone.
[0117] While many playback zones may be similar in dimension,
building material, and/or furniture types and arrangements, it is
unlikely that two playback zones will have the same exact playback
zone acoustic characteristic. As such, rather than storing an
individual entry for each unique playback zone acoustic
characteristic and their respective corresponding audio processing
algorithms, which may require an impractical amount of memory
storage, entries for similar or substantially the same playback
zone acoustic characteristics may be combined.
[0118] In one case, acoustic characteristics of two playback zones
may be similar when the two playback zones are substantially
similar rooms. In another case, the computing device may, as
suggested above, be performing the method 500 for the same playback
device in the same playback zone multiple times. In a further case,
the computing device may be performing method 500 for a different
playback device in the same playback zone. In yet another case, the
computing device may be performing method 500 for the playback
device in the same playback zone, but in a different location
within the playback zone. Other examples are also possible.
[0119] Whichever the case, during the process of generating entries
of playback zone acoustic characteristic and corresponding audio
processing algorithms, the computing device may determine that two
playback zones have substantially the same playback zone acoustic
characteristics. The computing device may then responsively
determine a third audio processing algorithm based on the first
audio processing algorithm and the second audio processing
algorithm. For example, the computing device may determine the
third audio processing algorithm by taking an average of the
parameters of the first and second audio processing algorithms.
[0120] The computing device may then store in the database an
association between the third audio processing algorithm and the
acoustic characteristics that are substantially the same. In one
example, the database entry for the third audio processing algorithm
may have a corresponding acoustic characteristic determined based on
an average of the two acoustic characteristics. In some cases, as
suggested above, the database may have only one entry for acoustic
characteristics that are substantially the same, in the interest of
conserving storage memory. As such, the entries for
the acoustic characteristics of the first playback zone and the
second playback zone may be discarded in favor of the entry for the
third audio processing algorithm. Other examples are also
possible.
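One possible sketch of combining two such entries follows: a
threshold on the difference between the two acoustic characteristics
decides whether they are "substantially the same" (the threshold,
function name, and dictionary layout are assumptions), and the merged
entry averages both the characteristics and the algorithm parameters.

from typing import Optional
import numpy as np

def merge_entries(entry_a: dict, entry_b: dict,
                  tolerance: float = 0.05) -> Optional[dict]:
    # Merge two entries whose zone characteristics are substantially the
    # same, averaging both the characteristics and the algorithm
    # coefficients; return None if the characteristics differ by more than
    # the tolerance (judged here by relative difference, an assumption).
    h_a = np.asarray(entry_a["zone_characteristic"])
    h_b = np.asarray(entry_b["zone_characteristic"])
    if h_a.shape != h_b.shape:
        return None
    relative_difference = np.linalg.norm(h_a - h_b) / (np.linalg.norm(h_a) + 1e-12)
    if relative_difference > tolerance:
        return None
    coeffs = (np.asarray(entry_a["coefficients"]) +
              np.asarray(entry_b["coefficients"])) / 2.0
    return {"zone_characteristic": (h_a + h_b) / 2.0,
            "coefficients": coeffs.tolist()}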
[0121] While the discussions above generally refer to the method
500 as being performed by a computing device, one having ordinary
skill in the art will appreciate that, as indicated above, the
functions of method 500 may alternatively be performed by one or
more other devices, such as one or more servers, one or more
playback devices, and/or one or more controller devices. In other
words, one or more of the blocks 502-510 may be performed by the
computing device, while one or more others of the blocks 502-510
may be performed by one or more other computing devices.
[0122] In one example, as described above, playback of the first
audio signal by the playback device at block 502 may be performed
by the playback device without any external command. Alternatively,
the playback device may play the first audio signal in response to
a command from a controller device and/or another playback device.
In another example, blocks 502-506 may be performed by one or more
playback devices or one or more controller devices, and the
computing device may perform blocks 508 and 510. In yet another
example, blocks 502-508 may be performed by one or more playback
devices or one or more controller devices, and the computing device
may only perform the functions of storing the audio processing
algorithm at block 510. Other examples are also possible.
b. Example Database of Audio Processing Algorithms and
Corresponding One or More Characteristics of Playback Zones
[0123] As indicated previously, a playback zone may have one or
more playback zone characteristics. The one or more playback zone
characteristics may include an acoustic characteristic of the
playback zone, as discussed above. The one or more characteristics
of the playback zone may also include one or more of (a) a
dimension of the playback zone, (b) an audio reflectivity
characteristic of the playback zone, (c) an intended use of the
playback zone, (d) an amount of furniture in the playback zone, (e)
sizes of furniture in the playback zone, and (f) types of furniture
in the playback zone. In one case, the audio reflectivity
characteristic of the playback zone may be related to flooring
and/or wall materials of the playback zone.
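For illustration, the one or more playback zone characteristics
listed above might be carried as a simple record such as the
following; the field names and units are assumptions, and the fields
are optional because user-provided information may be incomplete.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class PlaybackZoneCharacteristics:
    dimensions: Optional[Tuple[float, float, float]] = None  # (a) zone dimensions
    audio_reflectivity: Optional[float] = None                # (b) e.g. from floor/wall materials
    intended_use: Optional[str] = None                        # (c) e.g. "dining room"
    furniture_count: Optional[int] = None                     # (d)
    furniture_sizes: Optional[str] = None                     # (e)
    furniture_types: Optional[str] = None                     # (f)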
[0124] In some examples, an association between a determined audio
processing algorithm, such as p(t) discussed above, and additional
one or more characteristics of the playback zone may be stored in
the database. FIG. 7 shows an example flow diagram of a method 700
for maintaining a database of audio processing algorithms and the
one or more characteristics of the playback zone. Method 700 shown
in FIG. 7 presents an embodiment of a method that can be
implemented within an operating environment involving, for example,
the media playback system 100 of FIG. 1, one or more of the
playback device 200 of FIG. 2, and one or more of the control
device 300 of FIG. 3. In one example, the method 700 may be
performed by a computing device that is in communication with a
media playback system, such as the media playback system 100. In
another example, some or all of the functions of method 700 may
alternatively be performed by one or more other computing devices,
such as one or more servers, one or more playback devices, and/or
one or more controller devices.
[0125] Method 700 may include one or more operations, functions, or
actions as illustrated by one or more of blocks 702-708. Although
the blocks are illustrated in sequential order, these blocks may
also be performed in parallel, and/or in a different order than
those described herein. Also, the various blocks may be combined
into fewer blocks, divided into additional blocks, and/or removed
based upon the desired implementation.
[0126] As shown in FIG. 7, the method 700 involves causing a
playback device in a playback zone to play a first audio signal at
block 702, receiving (i) data indicating one or more
characteristics of a playback zone, and (ii) data indicating a
second audio signal detected by a microphone of the playback device
at block 704, based on the second audio signal and a characteristic
of the playback device, determining an audio processing algorithm
at block 706, and causing to be stored in a database, an
association between the determined audio processing algorithm and
at least one of the one or more characteristics of the playback
zone at block 708.
[0127] At block 702, the method 700 involves the computing device
causing a playback device in a playback zone to play a first audio
signal. In one example, block 702 may include the same, or
substantially the same functions as that of block 502 described in
connection to FIG. 5. For instance, the first audio signal may
include audio content with frequencies substantially covering a
renderable frequency range of the playback device or a frequency
range audible to a human. As such, any discussions above in
connection to block 502 may also be applicable to block 702.
[0128] At block 704, the method 700 involves receiving (i) data
indicating one or more characteristics of the playback zone, and
(ii) data indicating a second audio signal detected by a microphone
of the playback device. In one example, block 704 may include the
same, or substantially the same functions as that of block 504
described in connection to FIG. 5. For instance, the second audio
signal may include a portion corresponding to the first audio
signal played by the playback device. As such, any discussions
above in connection to block 504 may also be applicable to block
704.
[0129] In addition to that described previously in connection to
block 504, block 704 also involves receiving data indicating one or
more characteristics of the playback zone. As indicated above, the
playback zone may be a model playback zone used to simulate a
listening environment within which the playback device may play
audio content. In such a case, some of the one or more playback
zone characteristics for the playback zone may be known. For
instance, dimensions, floor plan, construction materials, and
furnishings for the playback zone may be known. In one case, model
playback zones may be constructed for the purposes of determining
audio processing algorithms for a database, in which case, some of
the one or more playback zone characteristics may be predetermined.
In another case, the playback zone may be a room of a user of the
playback device. As indicated above, such characteristics of the
playback zone may contribute to the acoustic characteristic of the
playback zone.
[0130] In one example, the computing device may receive the data
indicating the one or more playback zone characteristics via a
controller interface of a controller device used by a user or an
acoustics engineer. In another example, the computing device may
receive the data indicating the one or more characteristics of the
playback zone from the playback device in the playback zone. For
instance, the data indicating the one or more characteristics may
be received along with data indicating the second audio signal. The
data indicating the one or more playback zone characteristics may
be received before, during, or after playback of the first audio
signal by the playback device at block 702. Other examples are also
possible.
[0131] At block 706, the method 700 involves, based on the second
audio signal and a characteristic of the playback device,
determining an audio processing algorithm. In one example, block
706 may include the same or similar functions as those described
above in blocks 506 and 508 of FIG. 5. For instance, determining
the audio processing algorithm may involve determining an acoustic
characteristic of the playback zone based on the second audio
signal and a characteristic of the playback device, and then
determining the audio processing algorithm based on the acoustic
characteristic of the playback zone. The characteristic of the
playback device, as indicated above, may include one or more of an
acoustic characteristic of the playback device, specifications of
the playback device, and a model of the playback device.
[0132] As discussed previously, application of the determined audio
processing algorithm by the playback device when playing the first
audio signal in the playback zone may produce a third audio signal
having an audio characteristic that is substantially the same as,
or at least approximates, a predetermined audio characteristic. In
one case, the predetermined audio characteristic may be the same or
substantially the same as the predetermined audio characteristic
represented by the predetermined audio signal z(t) discussed above.
Other examples
are also possible.
[0133] At block 708, the method 700 involves causing to be stored
in a database, an association between the determined audio
processing algorithm and at least one of the one or more
characteristics of the playback zone. In one example, block 708 may
include the same or similar functions as those described above in
connection to block 510. In this case, however, the computing device may cause to
be stored in the database, an association between the audio
processing algorithm and at least one of the one or more
characteristics in addition to, or instead of the acoustic
characteristic of the playback zone.
[0134] As indicated above, the playback zone for which the audio
processing algorithm was determined may be a model playback zone
used to simulate a listening environment within which the playback
device may play audio content, or a room of a user of the playback
device. In some cases, the database may include entries generated
based on audio signals played and detected within model playback
zones as well as entries generated based on audio signals played
and detected within a room of a user of a playback device.
[0135] FIG. 6B shows an example portion of a database 650 of audio
processing algorithms, within which the audio processing algorithm
and associations between the audio processing algorithms and
playback zone acoustic characteristics determined in the
discussions above may be stored. As shown, the portion of the
database 650 may include a plurality of entries 652-658, similar to
the entries 602-608 of the database 600. For instance, entries 652
and 602 may have the same playback zone acoustic characteristic,
and the same audio processing algorithm coefficients, entries 654
and 604 may have the same playback zone acoustic characteristic,
and the same audio processing algorithm coefficients, entries 656
and 606 may have the same playback zone acoustic characteristic,
and the same audio processing algorithm coefficients, and entries
658 and 608 may have the same playback zone acoustic
characteristic, and the same audio processing algorithm
coefficients.
[0136] In addition to the playback zone acoustic characteristics,
the database 650 may also include zone dimensions information,
indicating dimensions of the playback zone having the corresponding
playback zone acoustic characteristic and the audio processing
algorithm determined based on the corresponding playback zone
acoustic characteristic. For instance, as shown, the entry 652 may
have a zone dimension of a_1 × b_1 × c_1, the
entry 654 may have a zone dimension of a_2 × b_2 × c_2,
the entry 656 may have a zone dimension of a_3 × b_3 × c_3,
and the entry 658 may have a zone dimension of a_4 × b_4 × c_4.
As such, in this example, the one or more characteristics stored in
association with the determined audio processing algorithm include
the acoustic characteristic of the playback zone and dimensions of
the playback zone. Other examples are also possible.
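Extending the earlier database 600 sketch, an entry of database 650
might simply add the zone dimensions alongside the acoustic
characteristic and coefficients; the field names and values below are
illustrative assumptions.

from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class Database650Entry:
    zone_characteristic: np.ndarray              # h_room^{-1}(t) samples
    coefficients: List[float]                    # w, x, y, z
    zone_dimensions: Tuple[float, float, float]  # a x b x c

entry_652 = Database650Entry(
    zone_characteristic=np.zeros(512),           # placeholder
    coefficients=[0.9, 1.1, 1.0, 0.95],          # placeholder w_1 .. z_1
    zone_dimensions=(4.0, 5.0, 2.5),             # placeholder a_1 x b_1 x c_1
)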
[0137] One having ordinary skill in the art will appreciate that
database 650 is just one example of a database that may be
populated and maintained by performing the functions of method 700.
In one example, the playback zone acoustic characteristics may be
stored in a different format or mathematical state (e.g., inverse
vs. non-inverse functions). In another example, the audio processing
algorithms may be stored as functions and/or equalization functions.
In yet another example, the database 650 may include only zone
dimensions and corresponding audio processing algorithms, and not
the corresponding acoustic characteristics of the playback zone.
Other examples are also possible.
[0138] Similar to method 500, the method 700 as described above (or
some variation of the method 700) may further be performed to
generate other entries in the database. For instance, given that
the playback device is a first playback device, the playback zone
is a first playback zone, and the audio processing algorithm is a
first audio processing algorithm, the method 700 may additionally
or alternatively be performed using a second playback device in a
second playback zone. In one example, the second playback device
may play a fourth audio signal in the second playback zone and a
microphone of the second playback device may detect a fifth audio
signal that includes a portion of the fourth audio signal played by
the second playback device. The computing device may then receive
(i) data indicating one or more characteristics of the second
playback zone, and (ii) data indicating the fifth audio signal
detected by a microphone of a second playback device in the second
playback zone.
[0139] The computing device may then determine an acoustic
characteristic of the second playback zone based on the fifth audio
signal and a characteristic of the second playback device. Based on
the acoustic characteristic of the second playback zone, the
computing device may determine a second audio processing algorithm
such that application of the determined second audio processing
algorithm by the second playback device when playing the fourth
audio signal in the second playback zone produces a sixth audio signal
having an audio characteristic substantially the same as the
predetermined audio characteristic, represented by the
predetermined audio signal z(t) shown in equations (7) and (8). The
computing device may then cause to be stored in a database, an
association between the second audio processing algorithm and at
least one of the one or more characteristics of the second playback
zone.
[0140] Similar to that discussed above in connection to the method
500, during the process of generating entries for the database, the
computing device may determine that two playback zones have similar
or substantially the same playback zone acoustic characteristics.
Accordingly, as also discussed above, the computing device may
combine the playback zone acoustic characteristics and determined
audio processing algorithms corresponding to the playback zone
acoustic characteristics (i.e. by averaging), and store the
combined playback zone acoustic characteristic and combined audio
processing algorithm as a single entry in the database. Other
examples are also possible.
[0141] Similar to the case of method 500, while the discussions
above generally refer to the method 700 as being performed by a
computing device, one having ordinary skill in the art will
appreciate that the functions of method 700 may alternatively be
performed by one or more other computing devices, such as one or
more servers, one or more playback devices, and/or one or more
controller devices. In other words, one or more of the blocks
702-708 may be performed by the computing device, while one or more
others of the blocks 702-708 may be performed by one or more other
computing devices. The other computing devices may include one or
more playback devices, one or more controller devices, and/or one
or more servers.
[0142] In one example, as described above, playback of the first
audio signal by the playback device at block 702 may be performed
by the playback device without any external command. Alternatively,
the playback device may play the first audio signal in response to
a command from a controller device and/or another playback device.
In another example, blocks 702-706 may be performed by one or more
playback devices or one or more controller devices, and the
computing device may perform block 708. Other examples are also
possible.
IV. Calibrating a Playback Device Based on Playback Zone
Characteristics
[0143] As indicated above, some examples described herein involve
calibrating a playback device for a playback zone. In some cases,
calibration of the playback device may involve determining an audio
processing algorithm for the playback device to apply when playing
audio content in the playback zone.
[0144] FIG. 8 shows an example playback environment 800 within
which a playback device may be calibrated. As shown, the playback
environment 800 includes a computing device 802, playback devices
804 and 806, controller device 808, and a playback zone 810.
Playback devices 804 and 806 may be similar to the playback device
200 shown in FIG. 2. As such, playback devices 804 and 806 may each
have a microphone, such as the microphone 220. In some cases, only
one of the playback devices 804 and 806 may have a microphone.
[0145] In one example, playback devices 804 and 806 may be part of
a media playback system and may be configured to play audio content
in synchrony, such as that shown and discussed above in connection
to the media playback system 100 of FIG. 1. In one case, playback
devices 804 and 806 may be grouped together to play audio content
in synchrony within the playback zone 810. Referring again to FIG.
1, the playback zone 810 may be any one or more of the different
rooms and zone groups in the media playback system 100. For
instance, the playback zone 810 may be the master bedroom. In such
a case, the playback devices 804 and 806 may correspond to the
playback devices 122 and 124, respectively.
[0146] In one example, the controller device 808 may be a device
that can be used to control the media playback system. In one case,
the controller device 808 may be similar to the control device 300
of FIG. 3. While the controller device 808 of FIG. 8 is shown to be
inside the playback zone 810, the controller device 808 may be
outside of the playback zone 810, or moving in or out of the
playback zone 810 while communicating with the playback device 804,
the playback device 806, and/or any other device in the media
playback system.
[0147] In one example, the computing device 802 may be a server in
communication with the media playback system. The computing device
802 may be configured to maintain a database of information
associated with the media playback system, such as registration
numbers associated with the playback devices 804 and 806. The
computing device 802 may also be configured to maintain a database
of audio processing algorithms, as described in the previous
section. Other examples are also possible.
[0148] Methods 900, 1000, and 1100, as will be discussed below
provide functions that may be performed for calibration of a
playback device in a playback zone, such as the playback devices
804 and 806 in the playback zone 810.
a. First Example Method for Determining an Audio Processing
Algorithm Based on a Detected Audio Signal
[0149] FIG. 9 shows an example flow diagram of a method 900 for
determining an audio processing algorithm based on one or more
playback zone characteristics. Method 900 shown in FIG. 9 presents
an embodiment of a method that can be implemented within an
operating environment involving, for example, the media playback
system 100 of FIG. 1, one or more of the playback device 200 of
FIG. 2, one or more of the control device 300 of FIG. 3, and the
playback environment 800 of FIG. 8. In one example, the method 900
may be performed by a computing device in communication with a
media playback system. In another example, some or all of the
functions of method 900 may alternatively be performed by one or
more other computing devices, such as one or more servers, one or
more playback devices, and/or one or more controller devices
associated with the media playback system.
[0150] Method 900 may include one or more operations, functions, or
actions as illustrated by one or more of blocks 902-908. Although
the blocks are illustrated in sequential order, these blocks may
also be performed in parallel, and/or in a different order than
those described herein. Also, the various blocks may be combined
into fewer blocks, divided into additional blocks, and/or removed
based upon the desired implementation.
[0151] As shown in FIG. 9, the method 900 involves causing a
playback device in a playback zone to play a first audio signal at
block 902, receiving from the playback device, data indicating a
second audio signal detected by a microphone of the playback device
at block 904, based on the second audio signal and an acoustic
characteristic of the playback device, determining an audio
processing algorithm at block 906, and transmitting data indicating
the determined audio processing algorithm to the playback device at
block 908.
[0152] At block 902, the method 900 involves causing a playback
device in a playback zone to play a first audio signal. Referencing
FIG. 8, the playback device may be the playback device 804, and the
playback zone may be the playback zone 810. As such, the playback
device may be a playback device similar to the playback device 200
shown in FIG. 2.
[0153] In one example, the computing device 802 may determine that
the playback device 804 is to be calibrated for the playback zone
810, and responsively cause the playback device 804 in the playback
zone 810 to play the first audio signal. In one case, the computing
device 802 may determine that the playback device 804 is to be
calibrated based on an input received from a user indicating that
the playback device 804 is to be calibrated. In one instance, the
input may be received from the user via the controller device 808.
In another case, the computing device 802 may determine that the
playback device 804 is to be calibrated because the playback device
804 is a new playback device, or newly moved to the playback zone
810. In a further case, calibration of the playback device 804 (or
any other playback devices in the media playback system) may be
performed periodically. As such, the computing device 802 may
determine that the playback device 804 is to be calibrated based on
a calibration schedule of the playback device 804. Other examples
are also possible. Responsive to determining that the playback
device 804 is to be calibrated, the computing device 802 may then
cause the playback device 804 to play the first audio signal.
[0154] While block 902 involves the computing device 802 causing
the playback device 804 to play the first audio signal, one having
ordinary skill in the art will appreciate that playback of the
first audio signal by the playback device 804 may not necessarily
be caused or initiated by the computing device 802. For instance,
the controller device 808 may send a command to the playback device
804 to cause the playback device 804 to play the first audio
signal. In another instance, the playback device 806 may cause the
playback device 804 to play the first audio signal. In a further
instance, the playback device 804 may play the first audio signal
without receiving a command from the computing device 802, playback
device 806, or controller device 808. In one example, the playback
device 804 may determine, based on a movement of the playback
device 804, or a change in the playback zone of the playback device
804, that a calibration is to be performed, and responsively play the
first audio signal. Other examples are also possible.
[0155] As suggested, the first audio signal may be a test signal or
measurement signal for calibrating the playback device 804 for the
playback zone 810. As such, the first audio signal may be
representative of audio content that may be played by the playback
device during regular use by a user. Accordingly, the first audio
signal may include audio content with frequencies substantially
covering a renderable frequency range of the playback device or a
frequency range audible to a human. In another example, the first
audio signal may be a favorite or commonly played audio track of a
user of the playback device.
[0156] At block 904, the method 900 involves receiving from the
playback device, a second audio signal detected by a microphone of
the playback device. Continuing with the examples above, given the
playback device 804 is similar to the playback device 200 of FIG.
2, the microphone of the playback device 804 may be similar to the
microphone 220 of the playback device 200. In one example, the
computing device 802 may receive the data from the playback device
804. In another example, the computing device 802 may receive the
data via another playback device such as the playback device 806, a
controller device such as the controller device 808, or another
computing device, such as another server.
[0157] While the playback device 804 is playing the first audio
signal, or shortly thereafter, the microphone of the playback
device 804 may detect the second audio signal. The second audio
signal may include sounds present in the playback zone. For
instance, the second audio signal may include a portion
corresponding to the first audio signal played by the playback
device 804.
[0158] In one example, the computing device 802 may receive data
indicating the second audio signal from the playback device 804 as a
media stream while the microphone detects the second audio signal.
In another example, the computing device 802 may receive from the
playback device 804, data indicating the second audio signal once
detection of the second audio signal by the microphone of the
playback device 804 is complete. In either case, the playback
device 804 may process the detected second audio signal (via an
audio processing component, such as the audio processing component
208 of the playback device 200) to generate the data indicating the
second audio signal, and transmit the data to the computing device
802. In one instance, generating the data indicating the second
audio signal may involve converting the second audio signal from an
analog signal to a digital signal. Other examples are also
possible.
[0159] At block 906, the method 900 involves based on the second
audio signal and an acoustic characteristic of the playback device,
determining an audio processing algorithm. In one example, the
acoustic characteristic of the playback device may be h.sub.p(t) as
discussed above in connection to block 506 of the method 500 shown
in FIG. 5. For instance, as described above, the acoustic
characteristic of the playback device may be determined by causing
a reference playback device in an anechoic chamber to play a
measurement signal, receiving from the reference playback device,
data indicating an audio signal detected by a microphone of the
reference playback device, and determining the acoustic
characteristic of the playback device based on a comparison between
the detected audio signal and the measurement signal.
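By way of illustration only, the following Python sketch shows one way such a comparison between the detected audio signal and the measurement signal might yield the device's acoustic characteristic h.sub.p(t), for example via regularized frequency-domain deconvolution. The function name, the use of NumPy, and the regularization constant are assumptions and not part of the disclosed method.

    # Illustrative sketch (not the disclosed implementation): estimate the
    # playback device's acoustic characteristic h_p(t) by deconvolving the
    # signal detected in an anechoic chamber by the known measurement signal.
    import numpy as np

    def estimate_device_characteristic(measurement, detected, eps=1e-8):
        n = len(measurement) + len(detected) - 1
        M = np.fft.rfft(measurement, n)              # spectrum of the measurement signal
        D = np.fft.rfft(detected, n)                 # spectrum of the detected signal
        H = D * np.conj(M) / (np.abs(M) ** 2 + eps)  # regularized spectral division
        return np.fft.irfft(H, n)                    # impulse response h_p(t)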
[0160] As suggested above, the reference playback device may be of
the same model as the playback device 804 being calibrated for the
playback zone 810. Also similar to that discussed above in
connection to block 506, the computing device may accordingly
determine an acoustic characteristic of the playback zone based on
the acoustic characteristic of the playback device and the second
audio signal.
[0161] In one example, the computing device 802 may determine an
audio processing algorithm based on the acoustic characteristic of
the playback zone similar to that discussed above in connection to
block 508. As such, the computing device 802 may determine the
audio processing algorithm based on the acoustic characteristic of
the playback zone and a predetermined audio characteristic, such
that an application of the determined audio processing algorithm by
the playback device 804 when playing the first audio signal in the
playback zone 810 may produce a third audio signal having an audio
characteristic substantially the same as the predetermined audio
characteristic, or that assumes the predetermined audio
characteristic at least to some extent.
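As a purely illustrative sketch, one way to derive such an algorithm is to compute a correction filter whose response, combined with the acoustic characteristic of the playback zone, approximates the predetermined audio characteristic. The function name, the spectral-division approach, and the constants below are assumptions rather than the disclosed implementation.

    # Illustrative sketch: derive correction-filter coefficients so that the
    # room response combined with the filter approximates a target response
    # (the predetermined audio characteristic).
    import numpy as np

    def derive_processing_filter(h_room, h_target, n_fft=4096, eps=1e-6):
        H_room = np.fft.rfft(h_room, n_fft)
        H_target = np.fft.rfft(h_target, n_fft)
        H_corr = H_target * np.conj(H_room) / (np.abs(H_room) ** 2 + eps)
        return np.fft.irfft(H_corr, n_fft)           # filter applied before playback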
[0162] In another example, the computing device 802 may select from
a plurality of audio processing algorithms, an audio processing
algorithm corresponding to the acoustic characteristic of the
playback zone 810. For instance, the computing device may access a
database, such as the databases 600 and 650 of FIGS. 6A and 6B,
respectively, and identify an audio processing algorithm based on
the acoustic characteristic of the playback zone 810. For instance,
referring to the database 600 of FIG. 6A, if the acoustic
characteristic of the playback zone 810 is determined as
h.sub.room.sup.-1(t)-3, then the audio processing algorithm having
coefficients w.sub.3, x.sub.3, y.sub.3, and z.sub.3 of database
entry 606 may be identified.
[0163] In some cases, an acoustic characteristic that exactly
matches the determined acoustic characteristic of the playback zone
810 may not be found in the database. In such a case, an audio
processing algorithm corresponding to an acoustic characteristic in
the database that is the most similar to the acoustic
characteristic of the playback zone 810 may be identified. Other
examples are also possible.
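For illustration, a nearest-match lookup of this kind could be sketched as follows; the database layout (a list of characteristic/coefficient pairs) and the Euclidean distance metric are assumptions, not the disclosed database structure.

    # Illustrative sketch: choose the stored algorithm whose associated
    # acoustic characteristic is most similar to the measured one.
    import numpy as np

    def lookup_algorithm(database, h_room):
        """database: list of (acoustic_characteristic, coefficients) pairs."""
        h_room = np.asarray(h_room, dtype=float)
        def distance(h_ref):
            h_ref = np.asarray(h_ref, dtype=float)
            n = max(len(h_ref), len(h_room))
            a = np.pad(h_ref, (0, n - len(h_ref)))
            b = np.pad(h_room, (0, n - len(h_room)))
            return np.linalg.norm(a - b)
        _, coefficients = min(database, key=lambda entry: distance(entry[0]))
        return coefficients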
[0164] At block 908, the method 900 involves transmitting data
indicating the determined audio processing algorithm to the
playback device. Continuing with the examples above, the computing
device 802 (or one or more other devices) may transmit the data
indicating the determined audio processing algorithm to the
playback device 804. The data indicating the determined audio
processing algorithm may also include a command to cause the
playback device 804 to apply the determined audio processing
algorithm when playing audio content in the playback zone 810. In
one example, applying the audio processing algorithm to the audio
content may modify a frequency equalization of the audio content.
In another example, applying the audio processing algorithm to the
audio content may modify a volume range of the audio content. Other
examples are also possible.
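Purely as an illustration of what such transmitted data might look like, a payload could bundle the algorithm parameters with the command to apply them; the field names, coefficient values, and JSON encoding below are assumptions and not a Sonos protocol.

    # Illustrative sketch of a payload sent to the playback device: the
    # determined parameters plus a command to apply them during playback.
    import json

    payload = json.dumps({
        "command": "apply_audio_processing",
        "playback_zone": "810",
        "coefficients": {"w": 0.9, "x": 1.1, "y": 0.95, "z": 1.0},
    })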
[0165] In some cases, a playback zone may include multiple playback
devices configured to play audio content in synchrony. For
instance, as indicated above, playback devices 804 and 806 may be
configured to play audio content in synchrony in the playback zone
810. In such a case, calibration of one of the playback devices may
involve the other playback devices.
[0166] In one example, a playback zone such as the playback zone
810 may include a first playback device, such as the playback
device 804, and a second playback device, such as the playback
device 806, configured to play audio content in synchrony.
Calibration of the playback device 804, as coordinated and
performed by the computing device 802, may involve causing the
playback device 804 to play a first audio signal and causing the
playback device 806 to play a second audio signal.
[0167] In one case, the computing device 802 may cause the playback
device 806 to play the second audio signal in synchrony with
playback of the first audio signal by the playback device 804. In
one instance, the second audio signal may be orthogonal to the
first audio signal such that a component of the synchronously
played audio content played by either of the playback devices 804
and 806 may be discernable. In another case, the computing device
may cause the playback device 806 to play the second audio signal
after playback of the first audio signal by the playback device 804
is complete. Other examples are also possible.
[0168] The computing device 802 may then receive from the playback
device 804, a third audio signal detected by a microphone of the
playback device 804, similar to that discussed in connection to
block 904. In this case, however, the third audio signal may include
both a portion corresponding to the first audio signal played by
the playback device 804, and a portion corresponding to the second
audio signal played by playback device 806.
[0169] Based on the third audio signal and an acoustic
characteristic of the playback device 804, the computing device 802
may then determine an audio processing algorithm, and transmit data
indicating the determined audio processing algorithm to the
playback device 804 for the playback device 804 to apply when
playing audio content in the playback zone 810, similar to that
described above in connection to blocks 906 and 908.
[0170] In one case, as indicated above, the playback device 806 may
also have a microphone and may also be calibrated similarly to that
described above. As indicated, the first audio signal played by the
playback device 804 and the second audio signal played by the
playback device 806 may be orthogonal, or otherwise discernable.
For instance, as also indicated above, the playback device 806 may
play the second audio signal after playback of the first audio
signal by the playback device 804 is completed. In another
instance, the second audio signal may have a phase that is
orthogonal to a phase of the first audio signal. In yet another
instance, the second audio signal may have a different and/or
varying frequency range than the first audio signal. Other examples
are also possible.
[0171] In any of these cases, discernable first and second audio
signals may allow the computing device 802 to parse from the third
audio signal detected by the playback device 804, a contribution of
the playback device 804 to the detected third audio signal, and a
contribution of the playback device 806 to the detected third audio
signal. Respective audio processing algorithms may then be
determined for the playback device 804 and the playback device
806.
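As a hedged illustration of this parsing step, the contribution of each device can be estimated by correlating the detected signal against each known test signal when the signals are discernable; the approach, names, and regularization constant below are assumptions.

    # Illustrative sketch: separate the contributions of two devices from one
    # microphone capture, relying on the test signals being (near-)orthogonal.
    import numpy as np

    def parse_contributions(detected, first_signal, second_signal, eps=1e-8):
        n = len(detected)
        D = np.fft.rfft(detected, n)
        contributions = {}
        for name, sig in (("device_804", first_signal), ("device_806", second_signal)):
            S = np.fft.rfft(sig, n)
            H = D * np.conj(S) / (np.abs(S) ** 2 + eps)  # response attributable to sig
            contributions[name] = np.fft.irfft(H, n)
        return contributions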
[0172] The respective audio processing algorithms may be determined
similar to that discussed above in connection to block 508. In one
case, a first acoustic characteristic of the playback zone may be
determined based on the third audio signal detected by the playback
device 804, and a second acoustic characteristic of the playback
zone may be determined based on a fourth audio signal detected by
the playback device 806. Similar to the third audio signal, the
fourth audio signal may also include a portion corresponding to the
first audio signal played by the playback device 804 and a portion
corresponding to the second audio signal played by the playback
device 806.
[0173] Respective audio processing algorithms for the playback
device 804 and the playback device 806 may then be determined based
on the first acoustic characteristic of the playback zone and the
second acoustic characteristic of the playback zone either
individually or in combination. In some instances, a combination of
the first acoustic characteristic of the playback zone and the
second acoustic characteristic of the playback zone may represent a
more comprehensive acoustic characteristic of the playback zone
than either the first or second acoustic characteristic of the
playback zone individually. The respective audio processing
algorithms may then be transmitted to the playback device 804 and
the playback device 806 to apply when playing audio content in the
playback zone 810. Other examples are also possible.
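One simple, purely illustrative way to combine the two acoustic characteristics into a more comprehensive characteristic is an element-wise average of the corresponding impulse responses; averaging is an assumption here, not the stated method.

    # Illustrative sketch: combine two measured zone characteristics.
    import numpy as np

    def combine_characteristics(h_first, h_second):
        a = np.asarray(h_first, dtype=float)
        b = np.asarray(h_second, dtype=float)
        n = max(len(a), len(b))
        a = np.pad(a, (0, n - len(a)))
        b = np.pad(b, (0, n - len(b)))
        return (a + b) / 2.0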
[0174] While the discussions above generally refer to the method
900 as being performed by the computing device 802 of FIG. 8, one
having ordinary skill in the art will appreciate that, as indicated
above, the functions of method 900 may alternatively be performed
by one or more other computing devices, such as one or more
servers, one or more playback devices, and/or one or more
controller devices. For instance, the functions of method 900 to
calibrate the playback device 804 for the playback zone 810 may be
performed by the playback device 804, the playback device 806, the
controller device 808, or another device in communication with the
playback device 804, but not shown in FIG. 8.
[0175] Further, in some cases, one or more of the blocks 902-908
may be performed by the computing device 802, while one or more
others of the blocks 902-908 may be performed by one or more other
devices. For instance, blocks 902 and 904 may be performed by one
or more of the playback device 804, the playback device 806, and
the controller device 808. In other words, a coordinating device
other than the computing device 802 may coordinate calibration of
the playback device 804 for the playback zone 810.
[0176] In some cases, at block 906, the coordinating device may
transmit the second audio signal to the computing device 802 such
that the computing device 802 may determine the audio processing
algorithm based on the second audio signal and the acoustic
characteristic of the playback device. The acoustic characteristic
of the playback device may be provided to the computing device 802
by the coordinating device, or provided from another device on
which characteristics of the playback device are stored. In one
case, the computing device 802 may perform the calculations of
block 906 because the computing device 802 has more processing
power than the coordinating device.
[0177] In one example, upon determining the audio processing
algorithm, the computing device 802 may transmit the determined
audio processing algorithm directly to the playback device 804 for
the playback device 804 to apply when playing audio content in the
playback zone 810. In another example, upon determining the audio
processing algorithm, the computing device 802 may transmit the
determined audio processing algorithm to the coordinating device,
and the coordinating device may perform block 908 and transmit the
determined processing algorithm to the playback device 804 (if the
coordinating device is not also the playback device 804). Other
examples are also possible.
b. Second Example Methods for Determining an Audio Processing
Algorithm Based on a Detected Audio Signal
[0178] In some cases, as described above, calibration of a playback
device in a playback zone may be coordinated and performed by a
computing device such as a server, or a controller device. In some
other cases, as also described above, calibration of a playback
device may be coordinated and/or performed by the playback device
being calibrated.
[0179] FIG. 10 shows an example flow diagram of a method 1000 for
determining an audio processing algorithm based on one or more
playback zone characteristics, as performed by the playback device
being calibrated. Method 1000 shown in FIG. 10 presents an
embodiment of a method that can be implemented within an operating
environment involving, for example, the media playback system 100
of FIG. 1, one or more of the playback device 200 of FIG. 2, one or
more of the control device 300 of FIG. 3, and the playback
environment 800 of FIG. 8. As indicated, method 1000 may be
performed by the playback device to be calibrated for a playback
zone. In some cases, some of the functions of method 1000 may
alternatively be performed by one or more other computing devices,
such as one or more servers, one or more other playback devices,
and/or one or more controller devices.
[0180] Method 1000 may include one or more operations, functions,
or actions as illustrated by one or more of blocks 1002-1008.
Although the blocks are illustrated in sequential order, these
blocks may also be performed in parallel, and/or in a different
order than those described herein. Also, the various blocks may be
combined into fewer blocks, divided into additional blocks, and/or
removed based upon the desired implementation.
[0181] As shown in FIG. 10, the method 1000 involves while in a
playback zone, playing a first audio signal at block 1002,
detecting by a microphone, a second audio signal at block 1004,
based on the second audio signal and an acoustic characteristic of
the playback device, determining an audio processing algorithm at
block 1006, and applying the determined audio processing algorithm
to audio data corresponding to a media item when playing the media
item at block 1008.
[0182] At block 1002, the method 1000 involves while in a playback
zone, playing a first audio signal. Referring to FIG. 8, the
playback device performing method 1000 may be the playback device
804, while the playback device 804 is in the playback zone 810. In
one example, block 1002 may be similar to block 902, but performed
by the playback device 804 being calibrated rather than the
computing device 802. Nevertheless, any discussions above in
connection to block 902 may also be applicable, sometimes with some
variation, to block 1002.
[0183] At block 1004, the method 1000 involves detecting by a
microphone, a second audio signal. The second audio signal may
include a portion corresponding to the first audio signal played by
the playback device. In one example, block 1004 may be similar to
block 904, but performed by the playback device 804 being
calibrated rather than the computing device 802. Nevertheless, any
discussions above in connection to block 904 may also be
applicable, sometimes with some variation, to block 1004.
[0184] At block 1006, the method 1000 involves based on the second
audio signal and an acoustic characteristic of the playback device,
determining an audio processing algorithm. In one example, block
1006 may be similar to block 906, but performed by the playback
device 804 being calibrated rather than the computing device 802.
Nevertheless, any discussions above in connection to block 906 may
also be applicable, sometimes with some variation, to block
1006.
[0185] In one case, functions for determining the audio processing
algorithm, as discussed in connection to block 906, may be
performed wholly by the playback device 804 that is being
calibrated for the playback zone 810. As such, the playback device
804 may determine an acoustic characteristic of the playback zone
810 based on the second audio signal and an acoustic characteristic
of the playback device 804. In one case, the playback device 804
may have stored locally, the acoustic characteristic of the
playback device 804. In another case, the playback device 804 may
receive from another device, the acoustic characteristic of the
playback device 804.
[0186] In one example, the playback device 804 may then select from
a plurality of audio processing algorithms, an audio processing
algorithm corresponding to the acoustic characteristic of the
playback zone 810. For instance, the playback device 804 may access
a database such as the databases 600 and 650 shown in and described
above in connection to FIGS. 6A and 6B, respectively, and identify
in the database an audio processing algorithm corresponding to an
acoustic characteristic substantially similar to the acoustic
characteristic of the playback zone 810.
[0187] In another example, similar to functions described above in
connection to block 906 of the method 900 and/or block 508 of the
method 500, the playback device 804 may calculate the audio
processing algorithm based on the acoustic characteristic of the
playback zone 810 and a predetermined audio characteristic, such
that an application of the determined audio processing algorithm by
the playback device 804 when playing the first audio signal in the
playback zone 810 may produce a third audio signal having an audio
characteristic substantially the same as the predetermined audio
characteristic, or that assumes the predetermined audio
characteristic at least to some extent.
[0188] In a further example, as discussed in the previous section,
a device other than the playback device 804 may perform some
or all of the functions of block 1006. For instance, the playback
device 804 may transmit data indicating the detected second audio
signal to a computing device, such as the computing device 802,
another playback device such as the playback device 806, a
controller device such as the controller device 808, and/or some
other device in communication with the playback device 804, and
request an audio processing algorithm. In another instance, the
playback device 804 may determine the acoustic characteristic of
the playback zone 810 based on the detected audio signal, and
transmit data indicating the determined acoustic characteristic of
the playback zone 810 to the other device with a request for an
audio processing algorithm based on the determined acoustic
characteristic of the playback zone 810.
[0189] In other words, in one aspect, the playback device 804 may
determine the audio processing algorithm by requesting from the
other device, an audio processing algorithm based on the detected
second audio signal and/or acoustic characteristic of the playback
zone 810 provided to the other device by the playback device
804.
[0190] In a case where the playback device 804 provides data
indicating the detected second audio signal but not the acoustic
characteristic of the playback zone 810, the playback device 804
may also transmit the acoustic characteristic of the playback
device 804 along with the data indicating the detected second audio
signal such that the other device may determine the acoustic
characteristic of the playback zone 810. In another case, the
device receiving the data indicating the detected second audio
signal may determine based on the data, a model of the playback
device 804 transmitting the data, and determine an acoustic
characteristic of the playback device 804 based on the model of the
playback device 804 (e.g., by referencing a database of playback
device acoustic characteristics). Other examples are also possible.
[0191] The playback device 804 may then receive the determined
audio processing algorithm. In one case, the playback device 804
may send the second audio signal to the other device because the
other device has more processing power than the playback device
804. In another case, the playback device 804 and one or more other
devices may perform the calculations and functions in parallel for
an efficient use of processing power. Other examples are also
possible.
[0192] At block 1008, the method 1000 involves applying the
determined audio processing algorithm to audio data corresponding
to a media item when playing the media item. In one example,
application of the audio processing algorithm to the audio data of
the media item by the playback device 804 when playing the media
item in the playback zone 810 may modify a frequency equalization
of the media item. In another example, application of the audio
processing algorithm to the audio data of the media item by the
playback device 804 when playing the media item in the playback
zone 810 may modify a volume range of the media item. In one
example, the playback device 804 may store in local memory storage,
the determined audio processing algorithm and apply the audio
processing algorithm when playing audio content in the playback
zone 810.
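For illustration, if the determined algorithm takes the form of filter coefficients, applying it to the audio data of a media item might look like the following; the FIR form is an assumption, since the disclosure only states that equalization or volume range may be modified.

    # Illustrative sketch: apply stored coefficients to a media item's samples
    # before playback, here as a simple FIR equalization filter.
    import numpy as np

    def apply_processing(audio_samples, fir_coefficients):
        return np.convolve(audio_samples, fir_coefficients, mode="same")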
[0193] In one example, the playback device 804 may be calibrated
for different configurations of the playback device 804. For
instance, the playback device 804 may be calibrated for a first
configuration involving individual playback in the playback zone
810, as well as for a second configuration involving synchronous
playback with the playback device 806 in the playback zone 810. In
such a case, a first audio processing algorithm may be determined,
stored, and applied for the first playback configuration of the
playback device, and a second audio processing algorithm may be
determined, stored, and applied for the second playback
configuration of the playback device.
[0194] The playback device 804 may then determine, based on a
playback configuration the playback device 804 is in at a given
time, which audio processing algorithm to apply when playing audio
content in the playback zone 810. For instance, if the playback
device 804 is playing audio content in the playback zone 810
without the playback device 806, the playback device 804 may apply
the first audio processing algorithm. On the other hand, if the
playback device 804 is playing audio content in the playback zone
810 in synchrony with the playback device 806, the playback device
804 may apply the second audio processing algorithm. Other examples
are also possible.
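A minimal sketch of this configuration-dependent selection is shown below; the dictionary layout and the coefficient values are illustrative assumptions.

    # Illustrative sketch: pick the stored algorithm that matches the current
    # playback configuration (individual vs. synchronous playback).
    stored_algorithms = {
        "individual": [0.9, 1.1, 0.95, 1.0],    # from the first calibration
        "synchronous": [1.0, 0.8, 1.05, 0.9],   # from the second calibration
    }

    def select_algorithm(playing_in_synchrony):
        return stored_algorithms["synchronous" if playing_in_synchrony else "individual"]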
c. Example Method for Determining an Audio Processing Algorithm
Based on Playback Zone Characteristics
[0195] In the discussions above, determination of an audio
processing algorithm may be generally based on determining an
acoustic characteristic of the playback zone, as determined based
on an audio signal detected by a playback device in the playback
zone. In some cases, an audio processing algorithm may also be
identified based on other characteristics of the playback zone, in
addition to or instead of the acoustic characteristic of the
playback zone.
[0196] FIG. 11 shows an example flow diagram for providing an audio
processing algorithm from a database of audio processing algorithms
based on one or more characteristics of the playback zone. Method
1100 shown in FIG. 11 presents an embodiment of a method that can
be implemented within an operating environment involving, for
example, the media playback system 100 of FIG. 1, one or more of
the playback device 200 of FIG. 2, one or more of the control
device 300 of FIG. 3, and the playback environment 800 of FIG. 8.
In one example, method 1100 may be performed, either individually
or collectively by one or more playback devices, one or more
controller devices, one or more servers, or one or more computing
devices in communication with the playback device to be calibrated
for the playback zone.
[0197] Method 1100 may include one or more operations, functions,
or actions as illustrated by one or more of blocks 1102-1108.
Although the blocks are illustrated in sequential order, these
blocks may also be performed in parallel, and/or in a different
order than those described herein. Also, the various blocks may be
combined into fewer blocks, divided into additional blocks, and/or
removed based upon the desired implementation.
[0198] As shown in FIG. 11, the method 1100 involves maintaining a
database of (i) a plurality of audio processing algorithms and (ii)
a plurality of playback zone characteristics at block 1102,
receiving data indicating one or more characteristics of a playback
zone at block 1104, based on the data, identifying in the database,
an audio processing algorithm at block 1106, and transmitting data
indicating the identified audio processing algorithm at block
1108.
[0199] At block 1102, the method 1100 involves maintaining a
database of (i) a plurality of audio processing algorithms and (ii)
a plurality of playback zone characteristics. In one example, the
database may be similar to the databases 600 and 650 as shown in
and described above in connection to FIGS. 6A and 6B, respectively.
As such, each audio processing algorithm of the plurality of audio
processing algorithms may correspond to one or more playback zone
characteristics of the plurality of playback zone characteristics.
Maintenance of the database may be as described above in connection
to the methods 500 and 700 of FIGS. 5 and 7, respectively. As
discussed above, the database may or may not be stored locally on
the device maintaining the database.
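As a purely illustrative sketch of how an entry in such a database might be structured, loosely modeled on the databases 600 and 650 referenced above, and with field names that are assumptions:

    # Illustrative sketch of one database entry mapping playback zone
    # characteristics to audio processing algorithm coefficients.
    from dataclasses import dataclass
    from typing import Optional, Sequence, Tuple

    @dataclass
    class CalibrationEntry:
        zone_acoustic_characteristic: Optional[Sequence[float]]  # e.g. h_room^-1(t)
        zone_dimensions: Optional[Tuple[float, float, float]]    # e.g. (a, b, c)
        algorithm_coefficients: Sequence[float]                   # e.g. (w, x, y, z)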
[0200] At block 1104, the method 1100 involves receiving data
indicating one or more characteristics of a playback zone. In one
example, the one or more characteristics of the playback zone may
include an acoustic characteristic of the playback zone. In another
example, the one or more characteristics of the playback zone may
include a dimension of the playback zone, a flooring material of
the playback zone, a wall material of the playback zone, an
intended use of the playback zone, an amount of furniture in the
playback zone, a size of furniture in the playback zone, and types
of furniture in the playback zone, among other possibilities.
[0201] In one example, referring again to FIG. 8, playback device
804 may be calibrated for the playback zone 810. As indicated
above, method 1100 may be performed, either individually or
collectively by the playback device 804 being calibrated, the
playback device 806, the controller device 808, the computing
device 802, or another device in communication with the playback
device 804.
[0202] In one case, the one or more characteristics may include an
acoustic characteristic of the playback zone 810. In such a case,
the playback device 804 in the playback zone 810 may play a first
audio signal and detect by a microphone of the playback device 804,
a second audio signal that includes a portion corresponding to the
first audio signal. In one instance, the data indicating the one or
more characteristics may be data indicating the detected second
audio signal. In another instance, based on the detected second
audio signal and an acoustic characteristic of the playback device
804, the acoustic characteristic of the playback zone 810 may be
determined, similar to that discussed previously. The data
indicating the one or more characteristics may then indicate the
acoustic characteristic of the playback zone. In either instance,
data indicating the one or more characteristics may then be
received by at least one of the one or more devices performing the
method 1100.
[0203] In another case, the one or more characteristics may include
a dimension of the playback zone, a flooring material of the
playback zone, and a wall material of the playback zone, among
other possibilities. In
such a case, a user may be prompted via a controller interface
provided by a controller device such as the controller device 808,
to enter or select one or more characteristics of the playback zone
810. For instance, the controller interface may provide a list of
playback zone dimensions, and/or a list of furniture arrangements,
among other possibilities for the user to select from. The data
indicating the one or more characteristics of the playback zone
810, as provided by the user, may then be received by at least one
of the one or more devices performing the method 1100.
[0204] At block 1106, the method 1100 involves based on the data,
identifying in the database, an audio processing algorithm.
Referring to the case where the one or more characteristics include
the acoustic characteristic of the playback zone 810, an audio
processing algorithm may be identified in the database based on the
acoustic characteristic of the playback zone 810. For instance,
referring to the database 600 of FIG. 6A, if the received data
indicates an acoustic characteristic of the playback zone 810 as
h.sub.room.sup.-1(t)-3, or substantially the same as
h.sub.room.sup.-1(t)-3, then the audio processing algorithm of
database entry 606 having coefficients w.sub.3, x.sub.3, y.sub.3,
and z.sub.3 may be identified. In the instance the data indicating
the one or more characteristics of the playback zone simply
includes data indicating the detected second audio signal, the
acoustic characteristic of the playback zone may further be
determined as described previously, prior to identifying the audio
processing algorithm. Other examples are also possible.
[0205] Referring to a case where the one or more characteristics
include dimensions of the playback zone, among other
characteristics, an audio processing algorithm may be identified in
the database based on the dimensions of the playback zone. For
instance, referring to the database 650 of FIG. 6B, if the received
data indicates dimensions of the playback zone 810 as
a.sub.4.times.b.sub.4.times.c.sub.4, or substantially the same as
a.sub.4.times.b.sub.4.times.c.sub.4, then the audio processing
algorithm of database entry 658 having coefficients w.sub.4,
x.sub.4, y.sub.4, and z.sub.4 may be identified. Other examples are
also possible.
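For illustration, a dimension-based lookup of this kind might choose the stored entry with the closest dimensions; the entry format and distance metric below are assumptions.

    # Illustrative sketch: identify an algorithm by matching the playback
    # zone's dimensions against stored entries.
    def lookup_by_dimensions(entries, dims):
        """entries: list of ((a, b, c), coefficients); dims: (a, b, c)."""
        def distance(stored):
            return sum((s - d) ** 2 for s, d in zip(stored, dims)) ** 0.5
        _, coefficients = min(entries, key=lambda e: distance(e[0]))
        return coefficients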
[0206] In some cases, more than one audio processing algorithm may
be identified based on the one or more characteristics of the
playback zone indicated in the received data. For instance, the
acoustic characteristic of the playback zone 810 may be determined
as h.sub.room.sup.-1(t)-3, which corresponds to audio processing
algorithm parameters w.sub.3, x.sub.3, y.sub.3, and z.sub.3, as
provided in entry 656 of the database 650 of FIG. 6B, while the
dimensions provided by the user for the playback zone 810 may be
a.sub.4.times.b.sub.4.times.c.sub.4, which corresponds to audio
processing algorithm parameters w.sub.4, x.sub.4, y.sub.4, and
z.sub.4, as provided in entry 658.
[0207] In one example, the audio processing algorithm corresponding
to a matching or substantially matching acoustic characteristic may
be prioritized. In another example, an average of the audio
processing algorithms (i.e. an averaging of the parameters) may be
calculated, and the average audio processing algorithm may be the
identified audio processing algorithm. Other examples are also
possible.
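A minimal sketch of the averaging option is shown below; element-wise averaging of the two parameter sets is one reading of the description and is presented only as an assumption.

    # Illustrative sketch: average the parameters of two identified algorithms.
    def average_algorithms(params_a, params_b):
        return [(a + b) / 2.0 for a, b in zip(params_a, params_b)]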
[0208] At block 1108, the method 1100 involves transmitting data
indicating the identified audio processing algorithm. Continuing
with the examples above, the data indicating the identified audio
processing algorithm may be transmitted to the playback device 804
being calibrated for the playback zone 810. In one case, the data
indicating the identified audio processing algorithm may be
transmitted directly to the playback device 804. In another case,
such as if the calibration of the playback device 804 is
coordinated by the controller device 808, and if the audio
processing algorithm was identified by the computing device 802,
the data indicating the identified audio processing algorithm may
be transmitted to the playback device 804 from the computing device
802 via the controller device 808. Other examples are also
possible.
[0209] As indicated above, the functions of method 1100 may be
performed by one or more servers, one or more
playback devices, and/or one or more controller devices. In one
example, maintenance of the database at block 1102 may be performed
by the computing device 802, and receiving of data indicating the
one or more characteristics of the playback zone at block 1104 may
be performed by the controller device 808 (the data may be provided
to the controller device 808 by the playback device 804 being
calibrated in the playback zone 810). Block 1106 may be performed
by the controller device 808 communicating with the computing
device 802 to access the database maintained by the computing
device 802 to identify the audio processing algorithm, and block 1108
may involve the computing device 802 transmitting the data
indicating the identified audio processing algorithm to the
playback device 804 either directly or via the controller device
808.
[0210] In another example, the functions of method 1100 may be
performed wholly or substantially wholly by one device. For
instance, the computing device 802 may maintain the database as
discussed in connection to block 1102.
[0211] The computing device 802 may then coordinate calibration of
the playback device 804. For instance, the computing device 802 may
cause the playback device 804 to play a first audio signal and
detect a second audio signal, receive from the playback device 804
data indicating the detected second audio signal, and determine an
acoustic characteristic of the playback zone 810 based on the data
from the playback device 804. In another instance, the computing
device 802 may cause the controller device 808 to prompt a user to
provide one or more characteristics of the playback zone 810 (e.g.,
dimensions, etc., as discussed above) and receive data indicating
the user-provided characteristics of the playback zone 810.
[0212] The computing device 802 may then, at block 1106, identify an
audio processing algorithm based on the received data, and at block
1108, transmit data indicating the identified audio processing
algorithm to the playback device 804. The computing device 802 may
also transmit a command for the playback device 804 to apply the
identified audio processing algorithm when playing audio content in
the playback zone 810. Other examples are also possible.
V. Conclusion
[0213] The description above discloses, among other things, various
example systems, methods, apparatus, and articles of manufacture
including, among other components, firmware and/or software
executed on hardware. It is understood that such examples are
merely illustrative and should not be considered as limiting. For
example, it is contemplated that any or all of the firmware,
hardware, and/or software aspects or components can be embodied
exclusively in hardware, exclusively in software, exclusively in
firmware, or in any combination of hardware, software, and/or
firmware. Accordingly, the examples provided are not the only
way(s) to implement such systems, methods, apparatus, and/or
articles of manufacture.
[0214] Additionally, references herein to "embodiment" mean that a
particular feature, structure, or characteristic described in
connection with the embodiment can be included in at least one
example embodiment of an invention. The appearances of this phrase
in various places in the specification are not necessarily all
referring to the same embodiment, nor are separate or alternative
embodiments mutually exclusive of other embodiments. As such, the
embodiments described herein, explicitly and implicitly understood
by one skilled in the art, can be combined with other
embodiments.
[0215] The specification is presented largely in terms of
illustrative environments, systems, procedures, steps, logic
blocks, processing, and other symbolic representations that
directly or indirectly resemble the operations of data processing
devices coupled to networks. These process descriptions and
representations are typically used by those skilled in the art to
most effectively convey the substance of their work to others
skilled in the art. Numerous specific details are set forth to
provide a thorough understanding of the present disclosure.
However, it is understood by those skilled in the art that certain
embodiments of the present disclosure can be practiced without
certain, specific details. In other instances, well known methods,
procedures, components, and circuitry have not been described in
detail to avoid unnecessarily obscuring aspects of the embodiments.
Accordingly, the scope of the present disclosure is defined by the
appended claims rather than the foregoing description of
embodiments.
[0216] When any of the appended claims are read to cover a purely
software and/or firmware implementation, at least one of the
elements in at least one example is hereby expressly defined to
include a tangible, non-transitory medium such as a memory, DVD,
CD, Blu-ray, and so on, storing the software and/or firmware.
* * * * *