Content reproducing apparatus, audio reproducing apparatus and content reproducing method

United States Patent RE46,481
Sako ,   et al. July 18, 2017

Content reproducing apparatus, audio reproducing apparatus and content reproducing method

Abstract

A content reproducing apparatus is disclosed which includes: a sensor; a discrimination circuit configured to discriminate whether a movement of a user is a first movement or a second movement based on a detection output from the sensor; a storage configured to store contents; a reproduction circuit configured to reproduce the contents; and a control circuit configured to supply the reproduction circuit with contents retrieved from the storage in accordance with a discrimination output from the discrimination circuit.


Inventors: Sako; Yoichiro (Tokyo, JP), Makino; Kenichi (Kanagawa, JP), Sano; Akane (Tokyo, JP), Shirai; Katsuya (Kanagawa, JP), Takai; Motoyuki (Tokyo, JP), Inoue; Makoto (Tokyo, JP)
Applicant:
Name City State Country Type

Sony Corporation

Tokyo

N/A

JP
Assignee: Sony Corporation (Tokyo, JP)
Family ID: 38080946
Appl. No.: 14/539,887
Filed: November 12, 2014

Related U.S. Patent Documents

Application Number Filing Date Patent Number Issue Date
Reissue of: 11702483 Feb 5, 2007 8311654 Nov 13, 2012

Foreign Application Priority Data

Feb 17, 2006 [JP] 2006-040052
Current U.S. Class: 1/1
Current CPC Class: G11B 27/105 (20130101); G11B 27/329 (20130101)
Current International Class: G06F 17/00 (20060101); G11B 27/10 (20060101); G11B 27/32 (20060101)
Field of Search: ;340/853.2 ;386/E5.002 ;482/1,4 ;434/236 ;600/300 ;700/94 ;707/723

References Cited [Referenced By]

U.S. Patent Documents
4776323 October 1988 Spector
5002491 March 1991 Abrahamson et al.
5119474 June 1992 Beitel et al.
5137501 August 1992 Mertesdorf
5215468 June 1993 Lauffer et al.
5648627 July 1997 Usa
6142913 November 2000 Ewert
6157744 December 2000 Nagasaka et al.
6230192 May 2001 Roberts et al.
6312363 November 2001 Watterson et al.
6336891 January 2002 Fedrigon et al.
6349275 February 2002 Schumacher et al.
6389222 May 2002 Ando et al.
6390923 May 2002 Yoshitomi et al.
6408128 June 2002 Abecassis
6570078 May 2003 Ludwig
6571193 May 2003 Unuma et al.
6578047 June 2003 Deguchi
6623427 September 2003 Mandigo
6662231 December 2003 Drosset et al.
6697824 February 2004 Bowman-Amuah
6704729 March 2004 Klein et al.
6757482 June 2004 Ochiai et al.
6807558 October 2004 Hassett et al.
6813438 November 2004 Bates et al.
6839680 January 2005 Liu et al.
6868440 March 2005 Gupta et al.
6944542 September 2005 Eschenbach
6944621 September 2005 Collart
7161887 January 2007 Snow et al.
7260402 August 2007 Ahmed
7293066 November 2007 Day
7320137 January 2008 Novak et al.
7346920 March 2008 Lamkin et al.
7395549 July 2008 Perlman et al.
7451177 November 2008 Johnson et al.
7464137 December 2008 Zhu et al.
7521623 April 2009 Bowen
7521624 April 2009 Asukai et al.
7542816 June 2009 Rosenberg
7546626 June 2009 Ohnuma
7790976 September 2010 Takai et al.
7894424 February 2011 Sako et al.
7930385 April 2011 Takai et al.
8010489 August 2011 Takai et al.
8027965 September 2011 Takehara et al.
8079962 December 2011 Takai et al.
8135700 March 2012 Takehara et al.
8135736 March 2012 Takehara et al.
8170003 May 2012 Sasaki et al.
8311654 November 2012 Sako et al.
8451832 May 2013 Takai et al.
8608621 December 2013 Ogg et al.
8837469 September 2014 Sako et al.
9230029 January 2016 Morse et al.
2001/0010754 August 2001 Ando et al.
2001/0014620 August 2001 Nobe et al.
2001/0015123 August 2001 Nishitani et al.
2001/0043198 November 2001 Ludtke
2001/0055038 December 2001 Kim
2002/0056142 May 2002 Redmond
2002/0073417 June 2002 Kondo et al.
2002/0085833 July 2002 Miyauchi
2002/0104101 August 2002 Yamato et al.
2002/0152122 October 2002 Chino et al.
2003/0007777 January 2003 Okajima et al.
2003/0018622 January 2003 Chau
2003/0026433 February 2003 Matt
2003/0034996 February 2003 Li et al.
2003/0060728 March 2003 Mandigo
2003/0065665 April 2003 Kinjo
2003/0069893 April 2003 Kanai et al.
2003/0088647 May 2003 ShamRao
2003/0093790 May 2003 Logan et al.
2003/0113096 June 2003 Taira et al.
2003/0126604 July 2003 Suh
2003/0163693 August 2003 Medvinsky
2003/0212810 November 2003 Tsusaka et al.
2004/0000225 January 2004 Nishitani et al.
2004/0044724 March 2004 Bell et al.
2004/0049405 March 2004 Buerger et al.
2004/0064209 April 2004 Zhang
2004/0126038 July 2004 Aublant et al.
2004/0220830 November 2004 Moreton et al.
2004/0252397 December 2004 Hodge et al.
2004/0255335 December 2004 Fickle et al.
2004/0259529 December 2004 Suzuki
2005/0041951 February 2005 Inoue et al.
2005/0102365 May 2005 Moore et al.
2005/0126370 June 2005 Takai et al.
2005/0241465 November 2005 Goto
2005/0249080 November 2005 Foote
2005/0278758 December 2005 Bodlaender
2005/0288991 December 2005 Hubbard et al.
2006/0078297 April 2006 Nishikawa et al.
2006/0087925 April 2006 Takai et al.
2006/0107822 May 2006 Bowen
2006/0112411 May 2006 Takai et al.
2006/0174291 August 2006 Takai et al.
2006/0189902 August 2006 Takai et al.
2006/0190413 August 2006 Harper
2006/0220882 October 2006 Makino
2006/0243120 November 2006 Takai et al.
2006/0245599 November 2006 Regnier
2006/0250994 November 2006 Sasaki et al.
2007/0005655 January 2007 Takehara et al.
2007/0025194 February 2007 Morse et al.
2007/0044010 February 2007 Sull et al.
2007/0067311 March 2007 Takai et al.
2007/0074253 March 2007 Takai et al.
2007/0074619 April 2007 Vergo
2007/0085759 April 2007 Lee et al.
2007/0098354 May 2007 Ando et al.
2007/0186752 August 2007 Georges et al.
2007/0204744 September 2007 Sako et al.
2007/0221045 September 2007 Terauchi et al.
2007/0237136 October 2007 Sako et al.
2007/0265720 November 2007 Sako et al.
2008/0153671 June 2008 Ogg et al.
2008/0263020 October 2008 Takehara et al.
2009/0028009 January 2009 Johnson et al.
2011/0016149 January 2011 Sako et al.
2011/0252053 October 2011 Takehara et al.
2014/0344407 November 2014 Sako et al.
2015/0066983 March 2015 Sako et al.
Foreign Patent Documents
1 039 400 Sep 2000 EP
1 128 358 Aug 2001 EP
1 160 651 Dec 2001 EP
1 320 101 Jun 2003 EP
1 503 376 Feb 2005 EP
1 705 588 Sep 2006 EP
1 729 290 Dec 2006 EP
1 746 520 Jan 2007 EP
04-044096 Feb 1992 JP
05-273971 Oct 1993 JP
06-249977 Sep 1994 JP
06-290574 Oct 1994 JP
07-064547 Mar 1995 JP
07-110681 Apr 1995 JP
08-131425 May 1996 JP
08-152880 Jun 1996 JP
08-286663 Nov 1996 JP
08-322014 Dec 1996 JP
08-328555 Dec 1996 JP
09-107517 Apr 1997 JP
10-055174 Feb 1998 JP
10-124047 May 1998 JP
10-254445 Sep 1998 JP
11-126067 May 1999 JP
2000-003174 Jan 2000 JP
2000-020054 Jan 2000 JP
2000-207263 Jul 2000 JP
3088409 Jul 2000 JP
2000-214851 Aug 2000 JP
3088409 Sep 2000 JP
2000-285059 Oct 2000 JP
2001-022350 Jan 2001 JP
3147888 Jan 2001 JP
2001-075995 Mar 2001 JP
3147888 Mar 2001 JP
2001-166772 Jun 2001 JP
2001-282813 Oct 2001 JP
2001-297090 Oct 2001 JP
2001-299980 Oct 2001 JP
2001-321564 Nov 2001 JP
2001-324984 Nov 2001 JP
2001-325787 Nov 2001 JP
2001-357008 Dec 2001 JP
2001-359096 Dec 2001 JP
2002-023746 Jan 2002 JP
2002-049631 Feb 2002 JP
2002-092013 Mar 2002 JP
2002-108918 Apr 2002 JP
2002-189663 Jul 2002 JP
2002-238022 Aug 2002 JP
2002-251185 Sep 2002 JP
2002-282227 Oct 2002 JP
2002-330411 Nov 2002 JP
2003-023589 Jan 2003 JP
2003-037856 Feb 2003 JP
2003-050816 Feb 2003 JP
2003-058770 Feb 2003 JP
2003-085888 Mar 2003 JP
2003-108154 Apr 2003 JP
2003-150173 May 2003 JP
2003-157375 May 2003 JP
2003-162285 Jun 2003 JP
2003-177749 Jun 2003 JP
2003-224677 Aug 2003 JP
2004-073272 Mar 2004 JP
2004-078467 Mar 2004 JP
P2004-113552 Apr 2004 JP
2004-139576 May 2004 JP
2004-151855 May 2004 JP
2004-173102 Jun 2004 JP
2004-185535 Jul 2004 JP
2004-199667 Jul 2004 JP
2004-222239 Aug 2004 JP
2004-226625 Aug 2004 JP
2004-234807 Aug 2004 JP
2004-240252 Aug 2004 JP
2004-526372 Aug 2004 JP
2004-252654 Sep 2004 JP
2004-259313 Sep 2004 JP
2004-259430 Sep 2004 JP
3598613 Sep 2004 JP
2004-282775 Oct 2004 JP
2004-317819 Nov 2004 JP
2004-326840 Nov 2004 JP
2004-361713 Dec 2004 JP
2004-362145 Dec 2004 JP
2004-362489 Dec 2004 JP
2004-362601 Dec 2004 JP
3598613 Dec 2004 JP
2005-004604 Jan 2005 JP
2005-043916 Feb 2005 JP
2005-062971 Mar 2005 JP
2005-084336 Mar 2005 JP
2005-093068 Apr 2005 JP
2005-107867 Apr 2005 JP
2005-156641 Jun 2005 JP
2005-196918 Jul 2005 JP
2005-202319 Jul 2005 JP
2007-149218 Jun 2007 JP
1023191 Oct 2004 NL
WO 93/22762 Nov 1993 WO
WO 01/82302 Nov 2001 WO
WO 02/05124 Jan 2002 WO
WO 02/080524 Oct 2002 WO
WO 02/093344 Nov 2002 WO
WO 03/043007 May 2003 WO
WO 2004/023358 Mar 2004 WO
WO 2004/077706 Sep 2004 WO
WO 2004/077760 Sep 2004 WO

Other References

Chang et al., Overview of the MPEG-7 standard. IEEE Transactions on Circuits and Systems for Video Technology. Jun. 2001; 11(6):688-695. cited by applicant .
Hawley, Structure out of sound. MIT PhD Thesis. 1993, pp. 1-185. cited by applicant .
Koike et al., TimeSlider: An interface to specify time point. Proc. of the ACM 10th Annual Symposium on User Interface Software and Technology, Oct. 17, 1997, pp. 43-44, Alberta, Canada. cited by applicant .
Little et al., "A digital on-demand video service supporting content-based queries." http://portal.acm.org/ft_gateway.cfm?id=168450&type=pdf&coll=GUIDE&dl=GUIDE&CFID=16387603&CFTOKEN=17953305. Proc. of the First ACM International Conference on Multimedia. New York, Aug. 1, 1993, 9 pages, XP-002429294. cited by applicant .
McParland et al., Exchanging TV-anytime metadata over IP networks. Document AN462 submitted to the TV-anytime forum, Sep. 17, 2002, pp. 1-38. cited by applicant .
O'Keeffe, Karl, Dancing monkeys. Masters project. Jun. 18, 2003, pp. 1-66. cited by applicant .
Zhang et al., Database and metadata support of a web-based multimedia digital library for Medical Education, http://www.springerlink.com/content/69Ohglrxv19gwy2q/fulltext.pdf. Proc. of the First International Conference on Advances in Web-based Learning, ICWL 2002. China, Aug. 17, 2002, pp. 339-350, XP002429295. cited by applicant.

Primary Examiner: Lee; Christopher E.
Attorney, Agent or Firm: Wolf, Greenfield & Sacks, P.C.

Claims



What is claimed is:

1. A content reproducing apparatus comprising: a sensor; a discrimination circuit configured to identify, based on a detection output from said sensor, an estimated .[.movement type of a.]. user's movement from a plurality of .[.movement types.]. .Iadd.movements .Iaddend.having different degrees of intensity; a storage configured to store contents; a reproduction circuit configured to reproduce said contents; an analysis circuit configured to use the estimated .Iadd.user's .Iaddend.movement .[.type.]. identified by the discrimination circuit to change analysis algorithms for analyzing the detection output from the sensor to determine a tempo of the user's movement; and a control circuit configured to supply said reproduction circuit with contents retrieved from said storage in accordance with the tempo of the user's movement.

2. The content reproducing apparatus according to claim 1, wherein, in accordance with said tempo determined by said analysis circuit, said control circuit selects a predetermined play list from a plurality of play lists derived from said contents classified by a predetermined tempo, and retrieves applicable contents from said storage in accordance with the selected play list.

3. The content reproducing apparatus according to claim 1, wherein, in accordance with the estimated .Iadd.user's .Iaddend.movement .[.type.]. identified by said discrimination circuit, said control circuit selects a predetermined play list from a plurality of play lists derived from said contents classified by a predetermined tempo, and retrieves applicable contents from said storage in accordance with the selected play list.

4. The content reproducing apparatus according to claim 1, wherein the plurality of .[.movement types.]. .Iadd.movements .Iaddend.comprises at least one movement .[.type.]. of walking and at least one movement .[.type.]. of running.

5. The content reproducing apparatus according to claim 1, wherein said discrimination circuit includes at least a period detection circuit configured to discriminate whether said movement of said user is walking or running based on periodicity of peaks in a waveform derived from said detection output from said sensor.

6. The content reproducing apparatus according to claim 1, wherein said discrimination circuit includes at least an amplitude detection circuit configured to discriminate whether said movement of said user is walking or running based on amplitude of peaks in a waveform derived from said detection output from said sensor.

7. The content reproducing apparatus according to claim 1, wherein said discrimination circuit includes at least an autocorrelation circuit configured to discriminate whether said movement of said user is walking or running based on autocorrelation calculations of said detection output from said sensor.

8. The content reproducing apparatus according to claim 1, wherein said discrimination circuit includes: a plurality of detection circuits each configured to detect whether said movement of said user is walking or running by use of one of algorithms different from one another on the basis of said detection output from said sensor; and a determination circuit configured to determine whether said movement of said user is walking or running by evaluating detection outputs from said plurality of detection circuits and output the result of the determination as said discrimination output.

9. A content reproducing method comprising acts of: identifying, based on a detection output from a sensor, an estimated .[.movement type of a.]. user's movement from a plurality of .[.movement types.]. .Iadd.movements .Iaddend.having different degrees of intensity; using the estimated .[.movement type of the.]. user's movement to analyze a tempo of the user's movement, comprising changing analysis algorithms for analyzing the tempo of said movement of said user based on said estimated .Iadd.user's .Iaddend.movement .[.type.].; and supplying a reproduction circuit with contents retrieved from a storage storing said contents in accordance with the tempo of the user's movement.

10. The content reproducing method according to claim 9, wherein the act of supplying comprises selecting, in accordance with results from said act of using the estimated .[.movement type of the.]. user's movement to analyze the tempo of the user's movement, a predetermined play list from a plurality of play lists derived from said contents classified by a predetermined tempo, and retrieving applicable contents from said storage in accordance with the selected play list.

11. The content reproducing method according to claim 9, wherein the act of supplying comprises selecting, in accordance with said estimated .[.movement type of the.]. user's movement a predetermined play list from a plurality of play lists derived from said contents, and retrieving applicable contents from said storage in accordance with the selected play list.

12. The content reproducing method according to claim 9, wherein the plurality of .[.movement types.]. .Iadd.movements .Iaddend.comprises at least one movement .[.type.]. of walking and at least one movement .[.type.]. of running.

13. The content reproducing method according to claim 9, wherein said act of identifying includes at least discriminating whether said movement of said user is walking or running based on periodicity of peaks in a waveform derived from said detection output from said sensor.

14. The content reproducing method according to claim 9, wherein said act of identifying includes at least discriminating whether said movement of said user is walking or running based on amplitude of peaks in a waveform derived from said detection output from said sensor.

15. The content reproducing method according to claim 9, wherein said act of identifying includes at least discriminating whether said movement of said user is walking or running based on autocorrelation calculations of said detection output from said sensor.

16. The content reproducing method according to claim 9, wherein said act of identifying includes acts of: using a plurality of algorithms to detect whether said movement of said user is walking or running based at least in part on said detection output from said sensor; determining whether said movement of said user is walking or running by evaluating detection outputs from said plurality of algorithms; and outputting a result of the act of determining as said estimated .[.movement type of the.]. user's movement.

17. At least one storage medium which stores computer-readable instructions for causing a computer to execute a method comprising acts of: identifying, based on a detection output from a sensor, an estimated .[.movement type of a.]. user's movement from a plurality of .[.movement types.]. .Iadd.movements .Iaddend.having different degrees of intensity; using the estimated .[.movement type of the.]. user's movement to change analysis algorithms for analyzing a tempo of the user's movement; and supplying a reproduction circuit with contents retrieved from a storage storing said contents in accordance with the estimated .[.movement type of the.]. user's movement.

18. The at least one storage medium of claim 17, wherein the plurality of .[.movement types.]. .Iadd.movements .Iaddend.comprises at least one movement .[.type.]. of walking and at least one movement .[.type.]. of running.

.Iadd.19. A content reproducing apparatus comprising: a sensor; a discrimination circuit configured to identify, based on a detection output from said sensor, an estimated user's movement from a plurality of movements having different degrees of intensity; a reproduction circuit configured to reproduce contents; an analysis circuit configured to use the estimated user's movement identified by the discrimination circuit to change analysis algorithms for analyzing the detection output from the sensor to determine a tempo of the user's movement; and a control circuit configured to supply said reproduction circuit with contents in accordance with the tempo of the user's movement. .Iaddend.

.Iadd.20. An information processing apparatus comprising circuitry including at least one processor and at least one memory configured to: identify, based on a detection output from a sensor, an estimated user's movement from a plurality of movements having different degrees of intensity; use the estimated user's movement to change analysis algorithms for analyzing the detection output from the sensor to determine a tempo of the user's movement; and supply content in accordance with the tempo of the user's movement. .Iaddend.

.Iadd.21. The information processing apparatus according to claim 20, wherein the circuitry is further configured to reproduce the content. .Iaddend.

.Iadd.22. The information processing apparatus according to claim 21, further comprising a storage, wherein the circuitry is configured to supply the content at least in part by retrieving the content from the storage in accordance with the tempo of the user's movement. .Iaddend.

.Iadd.23. The information processing apparatus according to claim 20, further comprising the sensor, wherein the circuitry is further configured to receive the detection output from the sensor. .Iaddend.

.Iadd.24. The information processing apparatus according to claim 20, wherein: the information processing apparatus comprises a first device comprising the circuitry; the sensor is separate from the first device; and the circuitry is further configured to receive the detection output from the sensor via wireless communication. .Iaddend.

.Iadd.25. The information processing apparatus according to claim 24, further comprising a second device comprising the sensor, wherein the second device is a wearable device attached to the user. .Iaddend.

.Iadd.26. The information processing apparatus according to claim 25, wherein the second device is a headphone. .Iaddend.

.Iadd.27. The information processing apparatus according to claim 20, wherein the plurality of movements comprises at least one movement of walking and at least one movement of running. .Iaddend.

.Iadd.28. The information processing apparatus according to claim 20, wherein the circuitry is configured to determine whether the user's movement is walking or running based on periodicity of peaks in the signal derived from the detection output. .Iaddend.

.Iadd.29. The information processing apparatus according to claim 28, wherein the signal is a waveform. .Iaddend.

.Iadd.30. The information processing apparatus according to claim 20, wherein the circuitry is configured to determine whether the user's movement is walking or running based on amplitude of peaks in the signal derived from the detection output. .Iaddend.

.Iadd.31. The information processing apparatus according to claim 30, wherein the signal is a waveform. .Iaddend.

.Iadd.32. The information processing apparatus according to claim 20, wherein the circuitry is configured to: select, based on the tempo of the user's movement, a playlist from a plurality of playlists, and select the content from the playlist. .Iaddend.

.Iadd.33. The information processing apparatus according to claim 20, wherein the circuitry is configured to use an algorithm based on a manner in which the sensor is attached to the user's body. .Iaddend.

.Iadd.34. The information processing apparatus according to claim 33, wherein the sensor is hung from the user's neck by a neck strap. .Iaddend.

.Iadd.35. The information processing apparatus according to claim 33, wherein the sensor is attached to a piece of clothing worn by the user. .Iaddend.

.Iadd.36. The information processing apparatus according to claim 33, wherein the sensor is held in a bag carried by the user. .Iaddend.

.Iadd.37. An information processing apparatus comprising circuitry including at least one processor and at least one memory configured to: identify, based on a detection output from a sensor, an estimated user's movement from a plurality of movements having different signals; and use the estimated user's movement to change analysis algorithms for analyzing the detection output from the sensor to determine a tempo of the user's movement. .Iaddend.

.Iadd.38. The information processing apparatus according to claim 37, wherein the circuitry is configured to reproduce content in accordance with the tempo of the user's movement. .Iaddend.

.Iadd.39. The information processing apparatus according to claim 38, further comprising a storage, wherein the circuitry is configured to reproduce the content at least in part by retrieving the content from the storage in accordance with the tempo of the user's movement. .Iaddend.

.Iadd.40. The information processing apparatus according to claim 38, wherein the circuitry is configured to: select, based on the tempo of the user's movement, a playlist from a plurality of playlists, and select the content from the playlist. .Iaddend.

.Iadd.41. The information processing apparatus according to claim 37, further comprising the sensor, wherein the circuitry is further configured to receive the detection output from the sensor. .Iaddend.

.Iadd.42. The information processing apparatus according to claim 37, wherein: the information processing apparatus comprises a first device comprising the circuitry; the sensor is separate from the first device; and the circuitry is further configured to receive the detection output from the sensor via wireless communication. .Iaddend.

.Iadd.43. The information processing apparatus according to claim 42, further comprising a second device comprising the sensor, wherein the second device is a wearable device attached to the user. .Iaddend.

.Iadd.44. The information processing apparatus according to claim 43, wherein the second device is a headphone. .Iaddend.

.Iadd.45. The information processing apparatus according to claim 37, wherein the plurality of movements comprises at least one movement of walking and at least one movement of running. .Iaddend.

.Iadd.46. The information processing apparatus according to claim 37, wherein the circuitry is configured to determine whether the user's movement is walking or running based on periodicity of peaks in the signal derived from the detection output. .Iaddend.

.Iadd.47. The information processing apparatus according to claim 46, wherein the signal is a waveform. .Iaddend.

.Iadd.48. The information processing apparatus according to claim 37, wherein the circuitry is configured to determine whether the user's movement is walking or running based on amplitude of peaks in the signal derived from the detection output. .Iaddend.

.Iadd.49. The information processing apparatus according to claim 48, wherein the signal is a waveform. .Iaddend.

.Iadd.50. The information processing apparatus according to claim 37, wherein the circuitry is configured to use an algorithm based on a manner in which the sensor is attached to the user's body. .Iaddend.

.Iadd.51. The information processing apparatus according to claim 50, wherein the sensor is hung from the user's neck by a neck strap. .Iaddend.

.Iadd.52. The information processing apparatus according to claim 50, wherein the sensor is attached to a piece of clothing worn by the user. .Iaddend.

.Iadd.53. The information processing apparatus according to claim 50, wherein the sensor is held in a bag carried by the user. .Iaddend.

.Iadd.54. An information processing apparatus comprising circuitry including at least one processor and at least one memory configured to: identify, based on a detection output from a sensor, an estimated user's movement from a plurality of movements having different signals; and determine a tempo of the user's movement by using the detection output and an algorithm corresponding to the estimated user's movement; and determine the tempo of the user's movement at least in part by using the estimated user's movement to change analysis algorithms for analyzing the detection output from the sensor. .Iaddend.

.Iadd.55. The information processing apparatus according to claim 54, wherein the circuitry is configured to reproduce content in accordance with the tempo of the user's movement. .Iaddend.

.Iadd.56. The information processing apparatus according to claim 55, further comprising a storage, wherein the circuitry is configured to reproduce the content at least in part by retrieving the content from the storage in accordance with the tempo of the user's movement. .Iaddend.

.Iadd.57. The information processing apparatus according to claim 56, further comprising the sensor, wherein the circuitry is further configured to receive the detection output from the sensor. .Iaddend.

.Iadd.58. The information processing apparatus according to claim 56, wherein: the information processing apparatus comprises a first device comprising the circuitry; the sensor is separate from the first device; and the circuitry is further configured to receive the detection output from the sensor via wireless communication. .Iaddend.

.Iadd.59. The information processing apparatus according to claim 58, further comprising a second device comprising the sensor, wherein the second device is a wearable device attached to the user. .Iaddend.

.Iadd.60. The information processing apparatus according to claim 59, wherein the second device is a headphone. .Iaddend.

.Iadd.61. The information processing apparatus according to claim 55, wherein the circuitry is configured to: select, based on the tempo of the user's movement, a playlist from a plurality of playlists, and select the content from the playlist. .Iaddend.

.Iadd.62. The information processing apparatus according to claim 54, wherein the plurality of movements comprises at least one movement of walking and at least one movement of running. .Iaddend.

.Iadd.63. The information processing apparatus according to claim 54, wherein the circuitry is configured to determine whether the user's movement is walking or running based on periodicity of peaks in the signal derived from the detection output. .Iaddend.

.Iadd.64. The information processing apparatus according to claim 63, wherein the signal is a waveform. .Iaddend.

.Iadd.65. The information processing apparatus according to claim 54, wherein the circuitry is configured to determine whether the user's movement is walking or running based on amplitude of peaks in the signal derived from the detection output. .Iaddend.

.Iadd.66. The information processing apparatus according to claim 65, wherein the signal is a waveform. .Iaddend.

.Iadd.67. The information processing apparatus according to claim 54, wherein the circuitry is configured to use an algorithm based on a manner in which the sensor is attached to the user's body. .Iaddend.

.Iadd.68. The information processing apparatus according to claim 67, wherein the sensor is hung from the user's neck by a neck strap. .Iaddend.

.Iadd.69. The information processing apparatus according to claim 67, wherein the sensor is attached to a piece of clothing worn by the user. .Iaddend.

.Iadd.70. The information processing apparatus according to claim 67, wherein the sensor is held in a bag carried by the user. .Iaddend.

.Iadd.71. An information processing method comprising acts of: identifying, based on a detection output from a sensor, an estimated user's movement from a plurality of movements having different degrees of intensity; using the estimated user's movement to change analysis algorithms for analyzing the detection output from the sensor to determine a tempo of the user's movement; and supplying content in accordance with the tempo of the user's movement. .Iaddend.

.Iadd.72. An information processing method comprising acts of: identifying, based on a detection output from a sensor, an estimated user's movement from a plurality of different movements; and using the estimated user's movement to change analysis algorithms for analyzing the detection output from the sensor to determine a tempo of the user's movement. .Iaddend.

.Iadd.73. An information processing apparatus comprising circuitry including at least one processor and at least one memory configured to: determine an algorithm corresponding to a manner in which a sensor is attached to a user's body; use the algorithm to determine information relating to the user's movement, wherein the information relating to the user's movement comprises a tempo of the user's movement; identify, based on a detection output from the sensor, an estimated user's movement from a plurality of movements having different signals; determine the tempo of the user's movement at least in part by using the estimated user's movement to change analysis algorithms for analyzing the detection output from the sensor; and provide an output based on the information relating to the user's movement. .Iaddend.

.Iadd.74. The information processing apparatus according to claim 73, wherein the output comprises content reproduced at the information processing apparatus, and wherein the circuitry is configured to reproduce the content in accordance with the tempo of the user's movement. .Iaddend.

.Iadd.75. The information processing apparatus according to claim 74, further comprising a storage, wherein the circuitry is configured to reproduce the content at least in part by retrieving the content from the storage in accordance with the tempo of the user's movement. .Iaddend.

.Iadd.76. The information processing apparatus according to claim 73, further comprising the sensor, wherein the circuitry is further configured to receive the detection output from the sensor. .Iaddend.

.Iadd.77. The information processing apparatus according to claim 76, wherein: the information processing apparatus comprises a first device comprising the circuitry; the sensor is separate from the first device; and the circuitry is further configured to receive the detection output from the sensor via wireless communication. .Iaddend.

.Iadd.78. The information processing apparatus according to claim 77, further comprising a second device comprising the sensor, wherein the second device is a wearable device attached to the user. .Iaddend.

.Iadd.79. The information processing apparatus according to claim 78, wherein the second device is a headphone. .Iaddend.

.Iadd.80. The information processing apparatus according to claim 73, wherein the plurality of movements comprises at least one movement of walking and at least one movement of running. .Iaddend.

.Iadd.81. The information processing apparatus according to claim 73, wherein the circuitry is configured to determine whether the user's movement is walking or running based on periodicity of peaks in the signal derived from the detection output. .Iaddend.

.Iadd.82. The information processing apparatus according to claim 81, wherein the signal is a waveform. .Iaddend.

.Iadd.83. The information processing apparatus according to claim 73, wherein the circuitry is configured to determine whether the user's movement is walking or running based on amplitude of peaks in the signal derived from the detection output. .Iaddend.

.Iadd.84. The information processing apparatus according to claim 83, wherein the signal is a waveform. .Iaddend.

.Iadd.85. The information processing apparatus according to claim 73, wherein the circuitry is configured to: select, based on the tempo of the user's movement, a playlist from a plurality of playlists; and select content from the playlist. .Iaddend.

.Iadd.86. The information processing apparatus according to claim 73, wherein the sensor is hung from the user's neck by a neck strap. .Iaddend.

.Iadd.87. The information processing apparatus according to claim 73, wherein the sensor is attached to a piece of clothing worn by the user. .Iaddend.

.Iadd.88. The information processing apparatus according to claim 73, wherein the sensor is held in a bag carried by the user. .Iaddend.
Description



CROSS REFERENCES TO RELATED APPLICATIONS

.[.The present invention contains subject matter related to Japanese Patent Application JP 2006-040052 filed in the Japanese Patent Office on Feb. 17, 2006, the entire contents of which being incorporated herein by reference..]. .Iadd.This is a reissue of U.S. application Ser. No. 11/702,483, filed Feb. 5, 2007, now U.S. Pat. No. 8,311,654, titled "CONTENT REPRODUCING APPARATUS, AUDIO REPRODUCING APPARATUS AND CONTENT REPRODUCING METHOD," which claims priority to Japanese Patent Application No. 2006-040052, filed Feb. 17, 2006, which is hereby incorporated by reference in its entirety. .Iaddend.

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to a content reproducing apparatus, an audio reproducing apparatus, and a content reproducing method.

2. Description of the Related Art

In recent years, growing numbers of people, increasingly conscious of their health, have come to take up walking, jogging, or running as a preferred way to maintain and improve their health or to stay generally in shape. Obtaining a certain level of salutary effects from such activities usually demands that people spend suitably prolonged periods of time on their athletic pursuits.

There have been proposed a number of audio reproducing apparatuses designed to support people in walking or running. Some of the proposed apparatuses are disclosed illustratively in Japanese Patent Laid-Open Nos. 2001-299980, 2003-177749, and 2005-156641. One such apparatus is structured to be easy to carry by a user and stores songs of variable tempos. When the user takes a walk, for example, the apparatus detects the tempo of the walking and lets the user listen to songs of the tempo fit for the detected pace of walking. The tempo of walking is represented illustratively by the number of steps per unit time (e.g., per minute) and the tempo of songs by the number of beats per minute.

For example, if the walking tempo is 120 bpm (beats per minute), then the apparatus reproduces songs at a tempo of 120 bpm, such as marches. This type of audio reproducing apparatus allows the user to walk rhythmically in keeping with the tempo of the songs being played. The apparatus is thus supposed to afford the user a pleasing walking experience.

In this specification, the terms "walking" and "running" will be used separately only if these two activities need to be distinguished from each other. If there is no specific need to separate these activities, they may be simply referred to as walking or walk/run.

SUMMARY OF THE INVENTION

Walking-support audio reproducing apparatuses of the above-outlined type generally utilize acceleration sensors to detect the user's bodily movements in terms of acceleration. The acceleration thus detected and output by the sensor is analyzed so as to determine the tempo of the user's walking.

FIGS. 11A and 11B show typical waveforms derived from a detection output from an acceleration sensor. FIG. 11A gives a waveform of a user during walking, and FIG. 11B illustrates a waveform of the same user during running. In both cases, the waveforms were acquired by a walking-support audio reproducing apparatus hung by a neck strap from the neck of the user who was walking or running. In FIGS. 11A and 11B, the horizontal axis stands for time and the vertical axis for the output voltage (in mV) from the acceleration sensor.

In these waveforms, the peaks indicated with small circles represent changes in acceleration caused by the impact of the user's foot hitting the ground. The periodicity of these peaks thus corresponds to the tempo of walking. The peaks with no circles attached stand for changes in acceleration caused by the audio reproducing apparatus swaying by itself or hitting the user's body during swing motion. As such, the latter peaks may be regarded as noise. With these characteristics taken into consideration, analyses of the waveforms in FIGS. 11A and 11B derived from the detection output from the sensor should permit detection of the user's walking tempo.

In practice, however, most apparatuses of the above type do not take such noise into account when analyzing the walking tempo based on the detection output shown in FIGS. 11A and 11B. In these waveforms, a noise-induced peak detected near the midpoint between two adjacent circle-marked peaks representing the walking tempo can be interpreted erroneously as another peak attributable to walking. Because such false measurements appear consistent with the walking-triggered peak pattern, the audio reproducing apparatus often leaves the error uncorrected. In addition, unlike during steady movement, the audio reproducing apparatus often has difficulty correctly performing spectrum analysis and autocorrelation calculations of the user's movements during transient periods, such as when the user shifts between walking and running. In such cases, the detected walking tempo can take on a highly unlikely value.

The present invention has been made in view of the above circumstances and provides arrangements for overcoming the above and other deficiencies of the related art.

In carrying out the invention and according to one embodiment thereof, there is provided a content reproducing apparatus including: a sensor; a discrimination circuit configured to discriminate whether a movement of a user is a first movement or a second movement based on a detection output from the sensor; a storage configured to store contents; a reproduction circuit configured to reproduce the contents; and a control circuit configured to supply the reproduction circuit with contents retrieved from the storage in accordance with a discrimination output from the discrimination circuit.

Preferably, the content reproducing apparatus may further include an analysis circuit configured to analyze tempos of the first movement or the second movement of the user in accordance with the detection output from the sensor. The analysis circuit may change analysis algorithms for analyzing the tempos based on the discrimination output from the discrimination circuit and the control circuit may retrieve contents from the storage in accordance with the tempo analyzed by the analysis circuit.

Preferably, the first movement and the second movement of the user may be walking and running respectively.

According to another embodiment of the present invention, there is provided a content reproducing method including the steps of: discriminating whether a movement of a user is a first movement or a second movement based on a detection output from a sensor; and supplying a reproduction circuit with contents retrieved from a storage storing the contents in accordance with a discrimination output from the discriminating step.

According to a further embodiment of the present invention, there is provided a storage medium which stores a computer-readable program for causing a computer to execute a procedure including the steps of: discriminating whether a movement of a user is a first movement or a second movement based on a detection output from a sensor; and supplying a reproduction circuit with contents retrieved from a storage storing the contents in accordance with a discrimination output from the discriminating step.

According to an embodiment of the present invention, as outlined above, the analysis algorithms in use are changed between walking and running. That means an optimal algorithm can be selected to analyze the tempos of walking or running. The selective algorithm usage translates into appreciably fewer errors in the result of the analysis than before.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic flow diagram showing how an embodiment of the present invention is structured;

FIG. 2 is a tabular view explanatory of the present invention;

FIG. 3 is a schematic view of lists explanatory of the present invention;

FIG. 4 is a schematic flow diagram showing part of the embodiment of the present invention;

FIG. 5 is a graphic representation explanatory of the present invention;

FIG. 6 is another graphic representation explanatory of the present invention;

FIG. 7 is a schematic view with tables explanatory of the present invention;

FIG. 8 is another graphic representation explanatory of the present invention;

FIGS. 9A and 9B are tabular views explanatory of the present invention;

FIG. 10 is a diagrammatic view explanatory of the present invention; and

FIGS. 11A and 11B are waveform charts explanatory of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

(1) Overview of the Present Invention

In the past, analyzing the detection output from the acceleration sensor often led to errors as mentioned above. That was because the tempos of the user's walk/run were obtained using the same analysis algorithm regardless of the difference between walking and running in terms of waveforms derived from the acceleration sensor detection output, as illustrated in FIGS. 11A and 11B.

In view of such circumstances, the present invention envisages bringing about the following four major phases:

(A) The detection output from the acceleration sensor is analyzed to discriminate whether the user's movement is walking or running.

(B) The detection output from the acceleration sensor is analyzed to obtain the tempos of the user's walking or running.

(C) Upon analysis in phase (B) above, analysis algorithms are changed between walking and running.

(D) The changing of the analysis algorithms in phase (C) above is based on a discrimination output from phase (A) above.

(2) Discrimination Between Walking and Running

As shown in FIGS. 11A and 11B, the peaks in waveforms derived from the detection output from the acceleration sensor differ significantly between walking and running in periodicity and amplitude. The waveform patterns also differ appreciably between the two modes of physical activity. It is thus possible to discriminate whether the user's movement is walking or running based on both the difference in terms of periodicity and amplitude of the waveform peaks stemming from the acceleration sensor detection output and the difference in waveform patterns.

(2-1) Difference in Peak Periodicity

Generally, the speed of walking is 50 to 100 m/min and the speed of running is 140 m/min or higher. The average step length is 70 cm for men and 65 cm for women.

It is therefore determined that the average man is walking if he takes fewer than 143 steps per minute and running if he takes 200 or more steps per minute. Likewise, it is determined that the average woman is walking if she takes fewer than 153 steps per minute and running if she takes 215 or more steps per minute.
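
To make the thresholds above concrete, here is a minimal Python sketch of cadence-based discrimination. The function name, the sex parameter, and the handling of cadences that fall between the walking and running thresholds are illustrative assumptions; the text states only the boundary figures.

```python
def classify_by_cadence(steps_per_minute: float, sex: str = "male") -> str:
    """Classify movement from cadence using the thresholds quoted above.

    Cadences between the walking and running thresholds are left
    indeterminate here; the text does not specify that band.
    """
    walk_below, run_at_or_above = (143, 200) if sex == "male" else (153, 215)
    if steps_per_minute < walk_below:
        return "walking"
    if steps_per_minute >= run_at_or_above:
        return "running"
    return "indeterminate"  # band not specified by the text

print(classify_by_cadence(120))            # walking
print(classify_by_cadence(210))            # running
print(classify_by_cadence(170, "female"))  # indeterminate
```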

(2-2) Difference in Waveform Amplitude

The magnitude of the impact on the user's body is about 1.1 to 1.2 times the user's body weight during walking and about three to four times the body weight during running. The difference in impact between the two modes of activity is attributable to the fact that at least one of the user's feet is always on the ground during walking, while both feet can be momentarily off the ground during running. It follows that walking and running can be distinguished from each other by detecting the varying amplitude in waveforms derived from the acceleration sensor detection output.
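
As a rough sketch of this amplitude criterion, the following assumes peak accelerations expressed in units of body weight (g); the 2.0 g decision threshold is an assumption chosen to sit between the roughly 1.1 to 1.2 g walking impacts and the three to four g running impacts quoted above, not a figure from the text.

```python
import numpy as np

def classify_by_peak_amplitude(peak_accels_g: np.ndarray,
                               threshold_g: float = 2.0) -> str:
    """Discriminate walking from running by impact amplitude.

    peak_accels_g holds detected peak accelerations in units of body
    weight (g). Any threshold between the walking band (~1.1-1.2 g)
    and the running band (~3-4 g) separates the two; 2.0 g is an
    illustrative choice.
    """
    median_peak = float(np.median(peak_accels_g))  # median rejects stray noise peaks
    return "running" if median_peak > threshold_g else "walking"
```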

(2-3) Difference in Waveform Pattern

The periodic waveform patterns derived from the acceleration sensor detection output prove to be distinctly different between walking and running when subjected to autocorrelation calculations. Performing autocorrelation calculations on the waveforms stemming from the acceleration sensor detection output allows noise and fluctuations to be removed from the waveforms.
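
A minimal autocorrelation sketch follows, assuming a uniformly sampled acceleration waveform; the 0.25 s to 1.5 s search window for plausible step periods is an assumption for illustration. The strongest off-zero autocorrelation peak gives the dominant stride period, with noise and fluctuations largely averaged out.

```python
import numpy as np

def stride_period(signal: np.ndarray, fs: float) -> float:
    """Estimate the dominant movement period (in seconds) by autocorrelation.

    signal is a uniformly sampled acceleration waveform; fs is its
    sample rate in Hz.
    """
    x = signal - signal.mean()                         # remove the DC offset
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # keep non-negative lags
    ac /= ac[0]                                        # normalize: ac[0] == 1
    lo, hi = int(0.25 * fs), int(1.5 * fs)             # plausible step periods
    best_lag = lo + int(np.argmax(ac[lo:hi]))          # strongest off-zero peak
    return best_lag / fs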

(2-4) How to Discriminate Between Walking and Running

According to an embodiment of the present invention, the techniques outlined in paragraphs (2-1) through (2-3) above are used to discriminate between walking and running. The result from using each of the techniques is evaluated for further discrimination between walking and running. Given the result of such discrimination, it is possible to determine an optimal algorithm for acquiring the tempos of walking or running through analysis of the acceleration sensor detection output.
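
The text says the result from each technique is evaluated for the final discrimination but does not spell out the combination rule; a simple majority vote, shown below purely as one plausible reading, illustrates the idea.

```python
def discriminate(votes: list[str]) -> str:
    """Combine per-technique walking/running decisions.

    votes holds the decisions from the periodicity, amplitude, and
    waveform-pattern techniques of (2-1) through (2-3). Majority
    voting is an assumption; the text leaves the rule unspecified.
    """
    running = sum(v == "running" for v in votes)
    return "running" if running > len(votes) / 2 else "walking"

print(discriminate(["running", "running", "walking"]))  # running
```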

(3) Preferred Embodiments

One preferred embodiment of the present invention is a walking-support audio reproducing apparatus furnished with play lists. With the tempo of the user's walking detected, the audio reproducing apparatus may reproduce songs from the play list that corresponds to the detected walking tempo.

(3-1) Typical Structure of the Audio Reproducing Apparatus

FIG. 1 is a schematic flow diagram showing a typical structure of a walking-support audio reproducing apparatus 100 embodying the present invention. The audio reproducing apparatus 100 may be used either as a walking-support apparatus or as a general-purpose portable music player. Although not shown, the apparatus is small and compact enough to be carried around by the user, illustratively in his or her pocket, during walking.

The audio reproducing apparatus 100 has a system control circuit 10 composed of a microcomputer. The control circuit 10 includes a CPU (central processing unit) 11 for executing programs, a ROM (read only memory) 12 that holds various data, a RAM (random access memory) 13 that provides a work area, and a nonvolatile memory 14. The memories 12, 13 and 14 are connected to the CPU 11 via a system bus 19.

In the above setup, the nonvolatile memory 14 serves to retain diverse information about the audio reproducing apparatus 100 and its user. The nonvolatile memory 14 is illustratively made up of a flash memory and contains a conversion table such as one (CNVTBL) shown in FIG. 2.

The conversion table CNVTBL is used illustratively to convert the tempos of the user's walking and of songs into tempo numbers TN. In the conversion table CNVTBL, the tempos of the user's walking and of songs are classified into seven categories represented by serial tempo numbers TN (=1 to 7), as shown in FIG. 2 (the categories are "0 to 69 bpm," "70 to 119 bpm," . . . , "210 to 999 bpm" as indicated).

With the conversion table CNVTBL of FIG. 2 in use, a detected tempo will be converted to TN=2 if the tempo falls into the range of, say, 70 to 119 bpm, and to TN=3 if it falls illustratively into the range of 120 to 139 bpm.

The nonvolatile memory 14 also contains play lists PL(1) through PL(7) as shown in FIG. 3. The play lists PL(1) through PL(7) have songs registered therein by tempo. The numbers one through seven of the play lists correspond to the tempo numbers one through seven in the conversion table CNVTBL. The songs having the tempo applicable to a given tempo number TN are registered in the play list PL(TN) of the corresponding number.

More specifically, songs A1 through Aa with their tempos falling between zero and 69 bpm (TN=1) are registered in the play list PL(1); songs B1 through Bb with their tempos between 70 and 119 bpm (TN=2) are registered in the play list PL(2); and so on. Songs G1 through Gg with their tempos at or higher than 210 bpm (TN=7) are registered in the play list PL(7).
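
A sketch of the CNVTBL lookup and play list registration just described. The bin boundaries for tempo numbers 1 through 3 and 7 come from the text; splitting the 140 to 209 bpm range into tempo numbers 4 through 6 is an assumption, since those category boundaries are elided above.

```python
# Upper bpm bound of each tempo-number bin. Bounds for TN = 1..3 and TN = 7
# are from the text; the TN = 4..6 bounds are illustrative assumptions.
CNVTBL_UPPER = [69, 119, 139, 159, 179, 209, 999]

def tempo_number(bpm: float) -> int:
    """Convert a walking or song tempo in bpm to a tempo number TN (1..7)."""
    for tn, upper in enumerate(CNVTBL_UPPER, start=1):
        if bpm <= upper:
            return tn
    return len(CNVTBL_UPPER)

# Play lists PL(1) through PL(7), keyed by tempo number.
play_lists: dict[int, list[str]] = {tn: [] for tn in range(1, 8)}

def register_song(title: str, bpm: float) -> None:
    """Register a song in the play list PL(TN) matching its tempo."""
    play_lists[tempo_number(bpm)].append(title)

register_song("80 bpm song", 80)  # lands in PL(2), matching paragraph (3-2-2)
```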

The audio reproducing apparatus 100 also has a storage 21. The storage 21 accumulates or stores music data and digital audio data to be reproduced as songs. For that purpose, the storage 21 is constituted by a large-capacity flash memory or by a small hard disk drive. Illustratively, the music data held in the storage 21 is digital audio data compressed in MP3 (MPEG-1/Audio Layer 3, where MPEG stands for Moving Picture Experts Group) format.

The storage 21 is connected to the system bus 19. A reproduction circuit 22 is also connected to the system bus 19. The reproduction circuit 22 is made up of a decoder circuit, a D/A (digital to analog) converter circuit, and an output amplifier. The decoder circuit decompresses compressed music data back to the original audio data. The D/A converter circuit converts the digital audio data into an analog audio signal.

Music data retrieved from the storage 21 is supplied to the reproduction circuit 22. The reproduction circuit 22 decompresses the supplied music data and converts the decompressed data to an analog audio signal. Following the D/A conversion, the analog audio signal is output to a headphone jack 23 that is connected with headphones 60.

An interface circuit 24 is also connected to the system bus 19. Music data is fed into the control circuit 10 from an externally furnished personal computer 70 through an input connector 25 and the interface circuit 24 to be stored into the storage 21.

This embodiment of the invention is furnished with a three-dimensional acceleration sensor 31 as a detection device that detects the walking tempo of the user carrying the audio reproducing apparatus 100 around. The acceleration sensor 31 detects the motions, acceleration, vibrations, and swaying of the audio reproducing apparatus 100 representative of the user's bodily movements (i.e., in terms of acceleration). A detection output S31 from the acceleration sensor 31 is fed to a discrimination/analysis circuit 32.

The discrimination/analysis circuit 32, as will be discussed later in more detail, analyzes the detection output S31 coming from the acceleration sensor 31 so as to detect the user's walk/run tempo. Upon analysis, the discrimination/analysis circuit 32 discriminates between walking and running using the procedure discussed in the paragraph (2) above in order to effect switchover to an optimal algorithm for analyzing the walking or running.

Various operation keys 41 are connected to the system bus 19. The system bus 19 is further connected with a display device such as an LCD (liquid crystal display) 43 by way of a display control circuit 42. In this setup, the operation keys 41 are used illustratively to accomplish the following: selecting the audio reproducing apparatus 100 either as a general-purpose portable music player or as a walking-support apparatus; selecting any one of different operation modes; selecting songs to play; and making other settings. The LCD 43 serves to display results of the operation keys 41 having been operated and information about the song being reproduced.

(3-2) Operations

(3-2-1) Storing the Songs

The music data of a song to be stored into the audio reproducing apparatus 100 is prepared beforehand in compressed format on the personal computer 70. With the personal computer 70 connected to the audio reproducing apparatus 100, a suitable transfer program is executed on the personal computer to designate transfer of the music data in question.

The music data prepared on the personal computer 70 is then supplied to the audio reproducing apparatus 100 through the connector 25. The supplied music data is admitted into the audio reproducing apparatus 100 through the interface circuit 24 under control of the CPU 11. The data is stored into the storage 21.

(3-2-2) Creating the Play Lists PL(1) Through PL(7)

Giving a command to create play lists causes the audio reproducing apparatus 100 to create skeleton play lists PL(1) through PL(7) (i.e., play lists with no contents inside). The tempo of each song placed into the storage 21 by the procedure discussed in paragraph (3-2-1) above is then analyzed. The analyzed tempo is converted to a tempo number TN by use of the conversion table CNVTBL, and the song is registered in the play list PL(TN) corresponding to that tempo number from among the play lists PL(1) through PL(7).

Illustratively, if an analysis of a given song reveals that it has a tempo of 80 bpm, the tempo is converted by the conversion table CNVTBL into TN=2. The song having that tempo is then registered in the play list PL(2).

The tempo of a given song is acquired by performing a spectrum analysis of its music data and by obtaining an autocorrelation function of the data. When the music data of a song is prepared on the personal computer 70, information indicative of the tempo of that song may be added to the music data as meta information that may later be used to identify the tempo. When the song is to be registered into any one of the play lists PL(1) through PL(7), the registration may be carried out using a file name of the corresponding music data together with the song title and the name of the artist involved.
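
As an illustration of acquiring a song's tempo, the sketch below autocorrelates an onset-strength envelope. This is one common recipe in the spirit of the spectrum-analysis and autocorrelation approach mentioned above, not the apparatus's exact algorithm; the 10 ms frame size and the 40 to 240 bpm search range are assumptions.

```python
import numpy as np

def estimate_song_bpm(audio: np.ndarray, fs: float) -> float:
    """Estimate a song's tempo by autocorrelating an onset-strength envelope."""
    frame = int(fs * 0.01)                              # 10 ms frames
    n = len(audio) // frame
    energy = (audio[: n * frame].reshape(n, frame) ** 2).sum(axis=1)
    onset = np.maximum(np.diff(energy), 0.0)            # rectified energy rise
    onset -= onset.mean()
    ac = np.correlate(onset, onset, mode="full")[len(onset) - 1:]
    env_fs = fs / frame                                 # envelope sample rate
    lo = int(env_fs * 60 / 240)                         # shortest lag: 240 bpm
    hi = int(env_fs * 60 / 40)                          # longest lag: 40 bpm
    lag = lo + int(np.argmax(ac[lo:hi]))
    return 60.0 * env_fs / lag
```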

(3-2-3) Using the Embodiment as a General-Purpose Portable Music Player for Music Reproduction

In this case, giving a command to reproduce a stored song causes the audio reproducing apparatus 100 to retrieve the applicable music data from the storage 21. The retrieved music data is supplied to the reproduction circuit 22 for data decompression and digital-to-analog conversion.

The reproduction circuit 22 thus outputs an analog audio signal derived from the retrieved music data. The analog audio signal is fed to the headphones 60 allowing the user to listen to the reproduced song. The title of the song being reproduced is displayed on the LCD 43.

Retrieval of music data from the storage 21 is controlled in accordance with a currently established reproduction mode. That is, the retrieved music data may be subjected illustratively to single-song reproduction, all-song continuous reproduction, random reproduction, or repeat reproduction. In this manner, the audio reproducing apparatus 100 can be utilized as a general-purpose portable music player.

A command may also be given to designate one of the play lists PL(1) through PL(7) for reproduction. In such a case, only the songs registered in the designated play list are reproduced selectively. Illustratively, when going to bed, the user might want to designate the play list PL(1) to reproduce songs of slow tempos.

(3-2-4) Using the Embodiment as a Walking-Support Apparatus for Music Reproduction

In this case, the audio reproducing apparatus 100 is used to reproduce songs having tempos commensurate with the user's walking speed. Giving a command to reproduce such songs causes the acceleration sensor 31 and discrimination/analysis circuit 32 to detect the tempo of the user's walking. The walking tempo thus detected is converted by the conversion table CNVTBL into a corresponding tempo number TN. Of the play lists PL(1) through PL(7), the play list PL(TN) corresponding to the tempo number TN derived from the conversion is selected. Then one of the songs registered in the selected play list PL(TN) is selected.

The music data of the selected song is retrieved from the storage 21 and sent to the reproduction circuit 22 for data decompression and digital-to-analog conversion. By the same procedure as that discussed in the paragraph (3-2-3) above, the selected song is reproduced and listened to by use of the headphones 60. Because the tempo of the song being reproduced is commensurate with the user's walking speed, the user can walk rhythmically and pleasantly in time with the song.

During walking, the current tempo number TN is compared with the preceding tempo number TN. A difference between the two numbers indicates a change in the walking tempo. In that case, another play list PL(TN) corresponding to the current walking tempo is selected, and songs are reproduced selectively from the newly selected play list PL(TN).
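In software terms, this walking-support selection reduces to a table lookup plus a play-list switch on tempo change. The sketch below is illustrative only; it reuses the hypothetical tempo_to_tn and play_lists from the sketch in (3-2-2) and treats song choice within a list as random:

```python
import random

def select_song_for_tempo(walking_bpm, previous_tn):
    """Map the detected walking tempo to TN via CNVTBL, note whether
    the play list changed, and pick a song from PL(TN) if any."""
    tn = tempo_to_tn(walking_bpm)
    switched = previous_tn is not None and tn != previous_tn
    song = random.choice(play_lists[tn]) if play_lists[tn] else None
    return tn, song, switched

# A walk that speeds up from ~80 bpm to ~115 bpm switches play lists.
tn, song, switched = select_song_for_tempo(80, previous_tn=None)
tn, song, switched = select_song_for_tempo(115, previous_tn=tn)
print(tn, switched)   # 3 True  (with the hypothetical table above)
```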

As will be discussed later, the analysis of the user's walking tempo by the discrimination/analysis circuit 32 is supplemented by the determination of whether the user's current activity is walking or running. That is, when play lists or songs are to be selected by the above-described procedure, the result of the determination of whether the user's motion comes from walking or running may be additionally taken into consideration.

(4) Typical Structure of the Discrimination/Analysis Circuit 32

FIG. 4 is a schematic flow diagram showing a typical structure of the discrimination/analysis circuit 32. As shown in FIG. 4, the discrimination/analysis circuit 32 is made up of an analysis circuit 32A and a discrimination circuit 32B. The detection output S31 from the acceleration sensor 31 is supplied to the analysis circuit 32A. By analyzing the supplied output with an appropriate analysis algorithm, the analysis circuit 32A detects the tempo of the user's walking or running. The resulting output from the analysis circuit 32A is fed to the control circuit 10 through the system bus 19.

The detection output S31 from the acceleration sensor 31 is also supplied to the discrimination circuit 32B. In this setup, the discrimination circuit 32B is constituted by a period detection circuit 321, an amplitude detection circuit 322, an autocorrelation circuit 323, and a determination circuit 324. The circuits 321 through 323 are each designed to process the detection output S31 by a different method when detecting the probability of the user's movement being either walking or running. The determination circuit 324 evaluates outputs S21 through S23 coming from the circuits 321 through 323, thereby determining whether the user is walking or running.

Illustratively, the period detection circuit 321 subjects the detection output S31 from the acceleration sensor 31 to spectrum analysis in order to detect the periodicity of peaks (marked by small circles in FIG. 11). On the basis of the periodicity thus detected, the period detection circuit 321 acquires the probability of the user either walking or running by use of the procedure discussed in the paragraph (2-1) above. The resulting detection output S21 from the period detection circuit 321 is fed to the determination circuit 324.

The amplitude detection circuit 322 illustratively demodulates the detection output S31 from the acceleration sensor 31 to detect the amplitude of the peaks (marked by small circles in FIG. 11) in the output S31. Based on the values of amplitude thus detected, the amplitude detection circuit 322 acquires the probability of the user either walking or running by use of the procedure discussed in the paragraph (2-2) above. The resulting detection output S22 from the amplitude detection circuit 322 is forwarded to the determination circuit 324.

The autocorrelation circuit 323 performs autocorrelation calculations on the detection output S31 from the acceleration sensor 31 to obtain the magnitude of autocorrelation in the output S31. On the basis of the magnitude of autocorrelation thus acquired, the autocorrelation circuit 323 detects the probability of the user either walking or running by use of the procedure discussed in the paragraph (2-3) above. The resulting detection output S23 from the autocorrelation circuit 323 is sent to the determination circuit 324.
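The three detection methods might be sketched as follows in Python. The mapping from each raw feature to a walking/running probability (the pivots, the frequency band, and the logistic scaling) is an assumption of this sketch, not something specified in the text:

```python
import numpy as np

def logistic(x):
    """Squash a raw score into a (0, 1) pseudo-probability."""
    return 1.0 / (1.0 + np.exp(-x))

def prob_running_from_period(s31, fs):
    """Period detection circuit 321 (sketch): running implies a faster
    step rate, so the dominant frequency found by spectrum analysis is
    mapped to a probability.  The 2.5 Hz pivot is assumed."""
    x = np.asarray(s31, dtype=float)
    spectrum = np.abs(np.fft.rfft(x - x.mean()))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs > 0.5) & (freqs < 5.0)          # plausible step rates
    step_hz = freqs[band][np.argmax(spectrum[band])]
    return logistic(4.0 * (step_hz - 2.5))

def prob_running_from_amplitude(s31, pivot=1200.0):
    """Amplitude detection circuit 322 (sketch): running peaks are much
    larger, so the mean of the strongest excursions is compared with a
    pivot (1,200 echoes the threshold example given later)."""
    top = np.sort(np.abs(np.asarray(s31, dtype=float)))[-10:]
    return logistic((top.mean() - pivot) / 300.0)

def prob_running_from_autocorr(s31, fs):
    """Autocorrelation circuit 323 (sketch): the running waveform is
    more sharply periodic, so a stronger normalized autocorrelation
    peak away from zero lag is read as evidence of running."""
    x = np.asarray(s31, dtype=float)
    x -= x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac /= ac[0]                                   # normalize to zero lag
    min_lag = int(0.2 * fs)                       # shorter than any real step
    return logistic(8.0 * (ac[min_lag:].max() - 0.5))
```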

The determination circuit 324 evaluates the detection outputs S21 through S23 coming from the circuits 321 through 323 respectively in order to determine whether the user's activity is walking or running. The result of the determination is output as a discrimination output S24 of the discrimination circuit 32B. Illustratively, if the detection outputs S21 through S23 each indicate the probability of the user's walking or running in percentage points, these values are weighted before they are added up. The addition allows the determination circuit 324 to determine whether the user is walking or running. If the detection outputs S21 through S23 each indicate the probability of walking or running in binary form, the determination circuit 324 may determine whether the user is walking or running by a majority decision derived from the outputs S21 through S23.
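The two decision rules described here, a weighted sum over percentage-style outputs and a majority vote over binary outputs, might look as follows; the particular weights are an assumption of this sketch:

```python
def determine_movement(s21, s22, s23, weights=(0.4, 0.3, 0.3)):
    """Determination circuit 324 (sketch), percentage case: weight and
    add the probabilities S21 through S23, then threshold the sum."""
    score = sum(w * p for w, p in zip(weights, (s21, s22, s23)))
    return "running" if score >= 0.5 else "walking"

def determine_movement_binary(b21, b22, b23):
    """Binary case: a simple majority decision (True means running)."""
    return "running" if b21 + b22 + b23 >= 2 else "walking"

print(determine_movement(0.9, 0.7, 0.4))              # running
print(determine_movement_binary(True, False, True))   # running
```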

The discrimination output S24 from the discrimination circuit 32B is supplied as a control parameter to the analysis circuit 32A. Given the discrimination output S24, the analysis circuit 32A switches accordingly to a suitable algorithm for analyzing the detection output S31 from the acceleration sensor 31. The analysis algorithm derived from the switchover is an optimal algorithm for analyzing the tempo of the user's walking or running. In this setup, the discrimination output S24 is also supplied to the control circuit 10.

The user's walking or running is analyzed by different methods, as follows. In the waveform of the detection output S31 of FIG. 11A observed during walking, other peaks that can lead to error appear near the peaks (marked by small circles) that correctly represent the tempo of walking. Since these peaks do not differ considerably in amplitude, the waveform is first subjected to autocorrelation calculations, which roughly determine the periodicity of the peaks. With the coarse periodicity thus determined and a given peak spotted, a time frame is assumed at the probable position of the next peak, to see whether such a peak exists in the frame. If a plurality of peaks are found in the time frame, the frame is narrowed so as to select the peak closest to the peak period. If no such peak is found in the time frame, the frame is widened in the search for a periodical peak. The periodical peak thus detected is stored temporarily as the basis for detecting the next peak.

In the waveform of the detection output S31 of FIG. 11B observed during running, the detected peaks are much more distinct in amplitude than those observed during walking. This makes it possible to handle only the waveform peaks that exceed a predetermined threshold level, a process implemented using a level comparator or a nonlinear amplifier. In the case of FIG. 11B, the threshold level may be set illustratively at 1,200. With the threshold determined, peaks exceeding it are observed, and the interval between two adjacent peaks is regarded as a peak period. A search is then made for a waveform peak close to the next period. The periodical peak thus detected is used, in this case as well, as the basis for detecting the next peak.
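Both peak-search procedures can be sketched in a few dozen lines. Only the 1,200 threshold and the widen-the-frame strategy come from the text; the step-period bounds, the frame half-width, and the grouping of contiguous super-threshold samples are assumptions of this sketch, which also assumes a record at least a few seconds long:

```python
import numpy as np

def track_walking_peaks(s31, fs):
    """Walking case (FIG. 11A, sketch): autocorrelation gives a coarse
    step period; each next peak is then searched for inside a frame
    around its predicted position, widening the frame if necessary."""
    x = np.asarray(s31, dtype=float)
    x -= x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(0.25 * fs), int(1.5 * fs)         # plausible step periods
    period = lo + int(np.argmax(ac[lo:hi]))        # coarse periodicity
    peaks = [int(np.argmax(x[:period + 1]))]       # first peak
    half = max(1, period // 4)                     # initial frame half-width
    while peaks[-1] + period < len(x) - 1:
        center = peaks[-1] + period                # predicted next peak
        w = half
        while True:
            f_lo = max(peaks[-1] + 1, center - w)  # never step backwards
            f_hi = min(len(x), center + w + 1)
            if x[f_lo:f_hi].max() > 0 or w >= period:
                break                              # peak found, or stop widening
            w += half                              # widen the frame
        peaks.append(f_lo + int(np.argmax(x[f_lo:f_hi])))
    return peaks

def running_peak_period(s31, threshold=1200.0):
    """Running case (FIG. 11B, sketch): keep only samples above the
    threshold, merge contiguous runs into single peaks, and read the
    mean interval between adjacent peaks as the step period."""
    x = np.asarray(s31, dtype=float)
    above = np.flatnonzero(x > threshold)
    if len(above) < 2:
        return None
    starts = above[np.r_[True, np.diff(above) > 1]]   # one index per peak
    return float(np.mean(np.diff(starts))) if len(starts) > 1 else None
```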

Many people are unconsciously in the habit of exerting more force on one foot than on the other during walking or running. For that reason, the length of time analyzed by the above-mentioned autocorrelation calculations preferably covers the peak period of not one step but two. When a difference is observed between the right and left feet in terms of the force exerted on them, this characteristic may also be taken into consideration in the search for waveform peaks representative of the tempo of walking.

The waveforms in FIGS. 11A and 11B vary depending on where the acceleration sensor 31 is installed, how the audio reproducing apparatus 100 is attached or held, what type of shoes the user wears, and the condition of the terrain or floor on which the user walks or runs. In view of these factors, the analysis methods discussed above may be supplemented by such procedures as band-pass filtering and frequency spectrum analysis for selective application of the parameters involved to the analysis of walking or running.

Where the discrimination/analysis circuit 32 of FIG. 4 is in use, the detection output S31 from the acceleration sensor 31 is analyzed as described above to discriminate whether the user's movement is walking or running. The output of the discrimination is used to switch between analysis algorithms for analyzing the tempo of walking or running. This makes it possible to adopt an optimum algorithm for the analysis, whereby the probability of error is lowered significantly.

(5) Walking Tempo

In the paragraphs that follow, the tempo of walking in general and a typical method for creating the conversion table CNVTBL will be discussed.

(5-1) Observations

Fourteen test subjects (eight adult males and six adult females) were observed in their walking habits in daily life. The observations revealed that their walking movements could be roughly classified into four groups: low-speed walking, normal walking, jogging, and dash as shown in FIG. 5. These four groups of walking may be applied by extension to the walking activity in general in everyday life.

The test subjects were also measured for their walking tempos. The resulting measurements are shown graphically in FIG. 6. The horizontal axis of FIG. 6 stands for walking tempos (i.e., average tempo of walking per unit time) and the vertical axis denotes frequency (i.e., number of people). In FIG. 6 and subsequent figures, the walking tempos are shown rounded to increments of 10 bpm.

The measurements above reveal that the walking tempos in daily life are not uniformly distributed; they tend to fall into one of the groups. It is also revealed that walking tempos of less than 69 bpm, of 140 to 159 bpm, and of 240 bpm and higher rarely occur in everyday life. For each group, it is possible to obtain the mean value, standard deviation, and coefficient of variation of the tempos involved and to estimate their ranges.

When walking or running, people are thought to settle automatically into the state in which their energy consumption for transport is most efficient. Walking tempos in the range of 140 to 159 bpm come between walking and jogging and fall into the state generally known as race walking. In daily life, people rarely, if ever, walk in the state of race walking; hence the measurements obtained as described above.

Each user has a particular pattern of walking as mentioned earlier. The audio reproducing apparatus 100 is arranged to learn its user's pattern of walking. The results of such learning are then turned into the conversion table such as one (CNVTBL) shown in FIG. 2.

(5-2) Learning of Walking Tempos

For the purpose of learning, the user carries the audio reproducing apparatus 100 and takes a walk. During the walking, as shown in FIG. 7, the audio reproducing apparatus 100 measures instantaneous walking tempos MT(t) at intervals of several milliseconds to several seconds. Mean walking tempos m_MT(t) at intervals of several seconds are then calculated from the measured walking tempos MT(t). FIG. 7 shows results obtained when the audio reproducing apparatus 100 measured the walking tempos MT(t) at intervals of one second and the measurements were used as the basis for calculating the mean walking tempos m_MT(t) at intervals of five seconds.
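Under the stated intervals (one-second measurements averaged over five-second spans), the computation of m_MT(t) might look as follows; the function name and the block-mean formulation are hypothetical:

```python
import numpy as np

def mean_walking_tempos(mt, measure_interval=1.0, mean_interval=5.0):
    """Sketch of the learning step in (5-2): instantaneous walking
    tempos MT(t), sampled once per measure_interval seconds, are
    averaged over consecutive mean_interval-second spans to give
    the mean walking tempos m_MT(t)."""
    mt = np.asarray(mt, dtype=float)
    per_block = int(mean_interval / measure_interval)    # 5 samples here
    n_blocks = len(mt) // per_block
    blocks = mt[:n_blocks * per_block].reshape(n_blocks, per_block)
    return blocks.mean(axis=1)                           # m_MT(t)

# 20 one-second measurements yield four five-second means to accumulate.
print(mean_walking_tempos([100, 102, 98, 101, 99] * 4))
```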

The walking tempos m_MT(t) thus calculated are accumulated in the storage 21 of the audio reproducing apparatus 100. This is how the reproducing apparatus 100 learns the user's walking tempos m_MT(t).

Once the walking tempos are learned, the audio reproducing apparatus 100 is connected to the personal computer 70 as shown in FIG. 1. From the audio reproducing apparatus 100, the accumulated walking tempos m_MT(t) and timestamp information are transferred to the personal computer 70. If the personal computer 70 currently retains any past walking tempos m_MT(t) and timestamp information, they may be replaced by, or merged with, the newly transferred walking tempos m_MT(t) and timestamp information.

(5-3) Division of Walking Tempos into Groups

The personal computer 70 creates a histogram of walking tempo appearances based on the transferred walking tempos m_MT(t) and timestamp information. From the histogram, maximum values MD(i)max (i=1, 2, 3, . . .) are detected, and the detected values are taken as vertexes representing the walking tempos classified into groups MD(i).

FIG. 8 shows a typical histogram created from the walking tempos m_MT(t). The horizontal axis of FIG. 8 stands for the walking tempos m_MT(t) and the vertical axis denotes the number of walking tempo appearances. In this histogram, the maximum values are established as MD(1)max, MD(2)max, MD(3)max, MD(4)max, and MD(5)max on the horizontal axis from left to right. These maximum values MD(n)max (n=1 to 5) are taken as vertexes each topping one of the groups MD(n) in which the walking tempos are distributed.

For each of the groups MD(n), a lower limit value MD(n)lower and an upper limit value MD(n)upper are obtained. If a given group MD(n) does not overlap with any other group, attention is paid to both ends of the group MD(n); the value on the horizontal axis at which the number of appearances is zero is taken either as the lower limit value MD(n)lower or as the upper limit value MD(n)upper.

If two groups MD(n-1) and MD(n) overlap with each other, a median value between the maximum value MD(n-1)max of the group MD(n-1) and the maximum value MD(n)max of the group MD(n) is regarded both as the upper limit value MD(n-1)upper of the group MD(n-1) and as the lower limit value MD(n)lower of the group MD(n).

If a maximum value is positioned at the top or bottom end of the histogram, as in the case of the maximum value MD(5)max of the group MD(5) in FIG. 8, that maximum value and the group associated with it are ignored.

When the groups MD(n) are reorganized using the above-described procedure, it is possible to obtain four pairs of the lower limit value MD(n)lower and upper limit value MD(n)upper (n=1 to 4) from the histogram of FIG. 8, as indicated in FIG. 9A.

The values n=1, 2, 3, 4 are associated with the tempo numbers TN=2, 3, 5, 6 respectively, as shown in the right-hand column of FIG. 9A. At the same time, the ranges of walking tempos delimited by the lower limit value MD(TN)lower and upper limit value MD(TN)upper as designated by these variables TN are registered in the conversion table CNVTBL, along with the correspondence between the ranges and the variables TN. The registrations prepare the lines indicated by the variables TN=2, 3, 5, 6 in the conversion table CNVTBL shown in FIG. 2.

Walking tempos m_MT(t) of less than 69 bpm (too slow), between 140 and 159 bpm (race walking), and higher than 210 bpm (too fast) in FIG. 8 correspond to the groups MD(5), MD(6) and MD(7) (n=5, 6, 7) in FIG. 9B respectively.

The values n=5, 6, 7 are associated with the tempo numbers TN=1, 4, 7 respectively, as shown in the right-hand column of FIG. 9B. At the same time, the ranges of walking tempos delimited by the lower limit value MD(TN)lower and upper limit value MD(TN)upper as designated by these variables TN are registered in the conversion table CNVTBL, along with the correspondence between the ranges and the variables TN. The registrations prepare the lines indicated by the variables TN=1, 4, 7 in the conversion table CNVTBL shown in FIG. 2. FIG. 10 graphically summarizes the relationship between the ranges of walking tempos and the tempo numbers TN indicated in FIGS. 2, 9A and 9B.
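The grouping and table-building procedure of (5-3) can be sketched as follows. The local-maximum detection, the zero-count boundary search, the median split for overlapping groups, and the exclusion of maxima at the histogram's ends follow the text; the fixed ranges for TN=1, 4, 7 (69 bpm and below, 140 to 159 bpm, 210 bpm and above) are taken from FIGS. 2 and 9B, while the bin width and all names are assumptions:

```python
import numpy as np

def groups_from_histogram(counts, bin_bpm=10):
    """Sketch of (5-3): detect local maxima MD(i)max in the m_MT(t)
    histogram, walk outward to zero-count bins for each group's
    bounds, and split overlapping groups at the median between their
    maxima.  Maxima at either end of the histogram are ignored."""
    counts = np.asarray(counts)
    maxima = [i for i in range(1, len(counts) - 1)
              if counts[i] >= counts[i - 1] and counts[i] > counts[i + 1]]
    groups = []
    for k, m in enumerate(maxima):
        lo, hi = m, m
        while lo > 0 and counts[lo - 1] > 0:
            lo -= 1                         # extend down to a zero-count bin
        while hi < len(counts) - 1 and counts[hi + 1] > 0:
            hi += 1                         # extend up to a zero-count bin
        if groups and lo <= groups[-1][1]:
            mid = (maxima[k - 1] + m) // 2  # overlap: split at the median
            groups[-1] = (groups[-1][0], mid)
            lo = mid
        groups.append((lo, hi))
    return [(l * bin_bpm, h * bin_bpm) for l, h in groups]

def build_cnvtbl(learned_ranges):
    """Combine four learned bpm ranges (assigned to TN=2, 3, 5, 6)
    with the fixed too-slow, race-walking and too-fast rows."""
    table = {tn: rng for tn, rng in zip((2, 3, 5, 6), learned_ranges)}
    table[1] = (0, 69)        # too slow
    table[4] = (140, 159)     # race walking
    table[7] = (210, None)    # too fast, no upper bound
    return dict(sorted(table.items()))
```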

The conversion table CNVTBL is thus created by the procedure discussed above. The created conversion table is transferred from the personal computer 70 to the audio reproducing apparatus 100 wherein the transferred table is retained illustratively in the memory 14.

(6) Conclusions

The above-described reproducing apparatus 100 analyzes the detection output S31 from the acceleration sensor 31 to discriminate whether the user's movement is walking or running, and switches, on the basis of the discrimination output, the analysis algorithm used to detect the tempo of walking or running. This makes it possible to use an optimal algorithm for analyzing the walking or running tempos and thereby to reduce analysis errors significantly.

Illustratively, the audio reproducing apparatus 100 creates the play lists PL(1) through PL(7) by walking tempo as shown in FIG. 3, discriminates which of the play lists PL(1) through PL(7) corresponds to the currently detected walking tempo m_MT(t), and selectively reproduces songs from the play list thus discriminated. That is, whether the walking tempo is slow or fast, the songs to be reproduced are automatically changed to suit the user's current movement. This gives the user the pleasant impression that the reproducing apparatus 100 is selectively reproducing songs to match his or her physical activity at the present moment.

Because the play lists PL(1) through PL(7) have been acquired through learning, there is no need for the user to fine-tune the listed choices or make additional adjustments to the lists. Furthermore, the listings are affected very little by the user's physical conditions, variations among individual users, or fluctuations in a given user's walking.

(7) Others

In the foregoing description, the embodiment of the invention was shown using the seven play lists PL(1) through PL(7). Alternatively, there may be prepared one play list of songs at tempos of 69 bpm or lower, 14 play lists covering songs at tempos between 70 and 209 bpm in increments of 10 bpm, and one play list of songs at tempos of 210 bpm or higher, any of which may be selected in keeping with the detected tempo of walking or running. As another alternative, there may be prepared two play lists, one covering songs at tempos of 139 bpm or lower and the other containing songs at tempos of 140 bpm or higher; either list may then be selected for reproduction of the songs it contains, again in keeping with the detected tempo of walking or running.

The personal computer 70 may create the play lists PL(1) through PL(7) and transfer the created lists to the audio reproducing apparatus 100 together with the digital audio data constituting the songs held in the lists. It is also possible for the audio reproducing apparatus 100 to have a standard conversion table CNVTBL installed beforehand. Every time the user takes a walk, the standard conversion table CNVTBL may be corrected or adjusted to reflect the user's own walking pattern. In this case, the longer the audio reproducing apparatus 100 is used, the more accurately songs are selected to suit the user's unique walking pattern.

In the above-described example, the audio reproducing apparatus 100 was described as being hung from the user's neck by a neck strap. Alternatively, the user may want to carry the apparatus in a clothes pocket or in a bag. In such cases, appropriate analysis algorithms may be devised to detect the walking or running tempo while the apparatus is kept in a pocket or bag.

The discrimination/analysis circuit 32 may be implemented either by hardware such as a DSP (digital signal processor) or by software made up of programs executed by the CPU 11. Whenever the song being reproduced is changed from its initial category, the change may be evaluated in terms of how the operation keys 41 are operated; the evaluations may then be used as the basis for subsequently selecting songs more to the user's taste. It is also possible for the user of the audio reproducing apparatus 100 to establish conditions for changing songs, or to set or vary the lower limit value MD(n)lower and upper limit value MD(n)upper, by taking a look at the histogram of FIG. 8.

The acceleration sensor 31 may be separated from the audio reproducing apparatus 100 and attached to, say, the headphones 60. In this case, the detection signal from the acceleration sensor 31 may be sent to the discrimination/analysis circuit 32 in wired or wireless fashion. The acceleration sensor 31 may be replaced by a speed sensor or by a gyro sensor. Furthermore, the music data may be integrated with video digital data.

It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and alterations may occur depending on design requirements and other factors insofar as they are within the scope of the appended claims or the equivalents thereof.

* * * * *
