Methods for generating a unified virtual snapshot and systems thereof

U.S. patent number 8,352,785 [Application Number 12/334,281] was granted by the patent office on 2013-01-08 for methods for generating a unified virtual snapshot and systems thereof. This patent grant is currently assigned to F5 Networks, Inc. Invention is credited to Jonathan Case Nicklin, Harald Skardal.


United States Patent 8,352,785
Nicklin, et al. January 8, 2013

Methods for generating a unified virtual snapshot and systems thereof

Abstract

A method, computer readable medium, and system for generating a unified virtual snapshot in accordance with embodiments of the present invention includes invoking with a file virtualization system a capture of a plurality of physical snapshots. Each of the physical snapshots comprises content at a given point in time in one of the plurality of data storage systems. A unified virtual snapshot is generated with the file virtualization system based on the captured plurality of the physical snapshots.


Inventors: Nicklin; Jonathan Case (Newburgh, MA), Skardal; Harald (Nashua, NH)
Assignee: F5 Networks, Inc. (Seattle, WA)
Family ID: 47428018
Appl. No.: 12/334,281
Filed: December 12, 2008

Related U.S. Patent Documents

Application Number: 61/013,539 (provisional)
Filing Date: Dec. 13, 2007

Current U.S. Class: 714/15; 707/649; 714/6.2; 711/162
Current CPC Class: G06F 16/188 (20190101); G06F 16/148 (20190101); G06F 2201/84 (20130101); G06F 11/1448 (20130101); G06F 2201/82 (20130101); G06F 2201/815 (20130101)
Current International Class: G06F 11/00 (20060101)
Field of Search: 714/7; 711/162; 707/649

References Cited [Referenced By]

U.S. Patent Documents
4993030 February 1991 Krakauer et al.
5218695 June 1993 Noveck et al.
5303368 April 1994 Kotaki
5473362 December 1995 Fitzgerald et al.
5511177 April 1996 Kagimasa et al.
5537585 July 1996 Blickenstaff et al.
5548724 August 1996 Akizawa et al.
5550965 August 1996 Gabbe et al.
5583995 December 1996 Gardner et al.
5586260 December 1996 Hu
5590320 December 1996 Maxey
5649194 July 1997 Miller et al.
5649200 July 1997 Leblang et al.
5668943 September 1997 Attanasio et al.
5692180 November 1997 Lee
5721779 February 1998 Funk
5724512 March 1998 Winterbottom
5806061 September 1998 Chaudhuri et al.
5832496 November 1998 Anand et al.
5832522 November 1998 Blickenstaff et al.
5838970 November 1998 Thomas
5862325 January 1999 Reed et al.
5884303 March 1999 Brown
5893086 April 1999 Schmuck et al.
5897638 April 1999 Lasser et al.
5905990 May 1999 Inglett
5917998 June 1999 Cabrera et al.
5920873 July 1999 Van Huben et al.
5937406 August 1999 Balabine et al.
5999664 December 1999 Mahoney et al.
6012083 January 2000 Savitzky et al.
6029168 February 2000 Frey
6044367 March 2000 Wolff
6047129 April 2000 Frye
6072942 June 2000 Stockwell et al.
6078929 June 2000 Rao
6085234 July 2000 Pitts et al.
6088694 July 2000 Burns et al.
6128627 October 2000 Mattis et al.
6128717 October 2000 Harrison et al.
6161145 December 2000 Bainbridge et al.
6161185 December 2000 Guthrie et al.
6181336 January 2001 Chiu et al.
6202156 March 2001 Kalajan
6223206 April 2001 Dan et al.
6233648 May 2001 Tomita
6237008 May 2001 Beal et al.
6256031 July 2001 Meijer et al.
6282610 August 2001 Bergsten
6289345 September 2001 Yasue
6308162 October 2001 Ouimet et al.
6324581 November 2001 Xu et al.
6339785 January 2002 Feigenbaum
6349343 February 2002 Foody et al.
6374263 April 2002 Bunger et al.
6389433 May 2002 Bolosky et al.
6393581 May 2002 Friedman et al.
6397246 May 2002 Wolfe
6412004 June 2002 Chen et al.
6438595 August 2002 Blumenau et al.
6477544 November 2002 Bolosky et al.
6487561 November 2002 Ofek et al.
6493804 December 2002 Soltis et al.
6516350 February 2003 Lumelsky et al.
6516351 February 2003 Borr
6549916 April 2003 Sedlar
6553352 April 2003 Delurgio et al.
6556997 April 2003 Levy
6556998 April 2003 Mukherjee et al.
6601101 July 2003 Lee et al.
6606663 August 2003 Liao et al.
6612490 September 2003 Herrendoerfer et al.
6721794 April 2004 Taylor et al.
6738790 May 2004 Klein et al.
6742035 May 2004 Zayas et al.
6748420 June 2004 Quatrano et al.
6757706 June 2004 Dong et al.
6775672 August 2004 Mahalingam et al.
6775673 August 2004 Mahalingam et al.
6775679 August 2004 Gupta
6782450 August 2004 Arnott et al.
6801960 October 2004 Ericson et al.
6826613 November 2004 Wang et al.
6839761 January 2005 Kadyk et al.
6847959 January 2005 Arrouye et al.
6847970 January 2005 Keller et al.
6850997 February 2005 Rooney et al.
6871245 March 2005 Bradley
6889249 May 2005 Miloushev et al.
6922688 July 2005 Frey, Jr.
6934706 August 2005 Mancuso et al.
6938039 August 2005 Bober et al.
6938059 August 2005 Tamer et al.
6959373 October 2005 Testardi
6961815 November 2005 Kistler et al.
6973455 December 2005 Vahalia et al.
6973549 December 2005 Testardi
6985936 January 2006 Agarwalla et al.
6985956 January 2006 Luke et al.
6986015 January 2006 Testardi
6990547 January 2006 Ulrich et al.
6990667 January 2006 Ulrich et al.
6996841 February 2006 Kadyk et al.
7003533 February 2006 Noguchi et al.
7006981 February 2006 Rose et al.
7010553 March 2006 Chen et al.
7013379 March 2006 Testardi
7020644 March 2006 Jameson
7020699 March 2006 Zhang et al.
7024427 April 2006 Bobbitt et al.
7051112 May 2006 Dawson
7054998 May 2006 Arnott et al.
7072917 July 2006 Wong et al.
7089286 August 2006 Malik
7111115 September 2006 Peters et al.
7113962 September 2006 Kee et al.
7120128 October 2006 Banks et al.
7120746 October 2006 Campbell et al.
7127556 October 2006 Blumenau et al.
7133967 November 2006 Fujie et al.
7143146 November 2006 Nakatani et al.
7146524 December 2006 Patel et al.
7152184 December 2006 Maeda et al.
7155466 December 2006 Rodriguez et al.
7165095 January 2007 Sim
7167821 January 2007 Hardwick et al.
7171496 January 2007 Tanaka et al.
7173929 February 2007 Testardi
7194579 March 2007 Robinson et al.
7234074 June 2007 Cohn et al.
7280536 October 2007 Testardi
7284150 October 2007 Ma et al.
7293097 November 2007 Borr
7293099 November 2007 Kalajan
7293133 November 2007 Colgrove et al.
7343398 March 2008 Lownsbrough
7346664 March 2008 Wong et al.
7383288 June 2008 Miloushev et al.
7401220 July 2008 Bolosky et al.
7406484 July 2008 Srinivasan et al.
7415488 August 2008 Muth et al.
7415608 August 2008 Bolosky et al.
7440982 October 2008 Lu et al.
7457982 November 2008 Rajan
7467158 December 2008 Marinescu
7475241 January 2009 Patel et al.
7477796 January 2009 Sasaki et al.
7509322 March 2009 Miloushev et al.
7512673 March 2009 Miloushev et al.
7519813 April 2009 Cox et al.
7562110 July 2009 Miloushev et al.
7571168 August 2009 Bahar et al.
7574433 August 2009 Engel
7587471 September 2009 Yasuda et al.
7590747 September 2009 Coates et al.
7599941 October 2009 Bahar et al.
7610307 October 2009 Havewala et al.
7610390 October 2009 Yared et al.
7624109 November 2009 Testardi
7639883 December 2009 Gill
7644109 January 2010 Manley et al.
7653699 January 2010 Colgrove et al.
7689596 March 2010 Tsunoda
7694082 April 2010 Golding et al.
7711771 May 2010 Kirnos
7734603 June 2010 McManis
7743035 June 2010 Chen et al.
7752294 July 2010 Meyer et al.
7769711 August 2010 Srinivasan et al.
7788335 August 2010 Miloushev et al.
7822939 October 2010 Veprinsky et al.
7831639 November 2010 Panchbudhe et al.
7849112 December 2010 Mane et al.
7870154 January 2011 Shitomi et al.
7877511 January 2011 Berger et al.
7885970 February 2011 Lacapra
7913053 March 2011 Newland
7953701 May 2011 Okitsu et al.
7958347 June 2011 Ferguson
8005953 August 2011 Miloushev et al.
2001/0014891 August 2001 Hoffert et al.
2001/0047293 November 2001 Waller et al.
2001/0051955 December 2001 Wong
2002/0035537 March 2002 Waller et al.
2002/0059263 May 2002 Shima et al.
2002/0065810 May 2002 Bradley
2002/0073105 June 2002 Noguchi et al.
2002/0083118 June 2002 Sim
2002/0087887 July 2002 Busam et al.
2002/0120763 August 2002 Miloushev et al.
2002/0133330 September 2002 Loisey et al.
2002/0133491 September 2002 Sim et al.
2002/0138502 September 2002 Gupta
2002/0143909 October 2002 Botz et al.
2002/0147630 October 2002 Rose et al.
2002/0150253 October 2002 Brezak et al.
2002/0156905 October 2002 Weissman
2002/0160161 October 2002 Misuda
2002/0161911 October 2002 Pinckney, III et al.
2002/0188667 December 2002 Kirnos
2003/0009429 January 2003 Jameson
2003/0012382 January 2003 Ferchichi et al.
2003/0028514 February 2003 Lord et al.
2003/0033308 February 2003 Patel et al.
2003/0033535 February 2003 Fisher et al.
2003/0061240 March 2003 McCann et al.
2003/0065956 April 2003 Belapurkar et al.
2003/0115218 June 2003 Bobbitt et al.
2003/0115439 June 2003 Mahalingam et al.
2003/0135514 July 2003 Patel et al.
2003/0149781 August 2003 Yared et al.
2003/0159072 August 2003 Bellinger et al.
2003/0171978 September 2003 Jenkins et al.
2003/0177364 September 2003 Walsh et al.
2003/0177388 September 2003 Botz et al.
2003/0204635 October 2003 Ko et al.
2004/0003266 January 2004 Moshir et al.
2004/0006575 January 2004 Visharam et al.
2004/0010654 January 2004 Yasuda et al.
2004/0025013 February 2004 Parker et al.
2004/0028043 February 2004 Maveli et al.
2004/0028063 February 2004 Roy et al.
2004/0030857 February 2004 Krakirian et al.
2004/0054777 March 2004 Ackaouy et al.
2004/0093474 May 2004 Lin et al.
2004/0098383 May 2004 Tabellion et al.
2004/0098595 May 2004 Aupperle et al.
2004/0133573 July 2004 Miloushev et al.
2004/0133577 July 2004 Miloushev et al.
2004/0133606 July 2004 Miloushev et al.
2004/0133607 July 2004 Miloushev et al.
2004/0133652 July 2004 Miloushev et al.
2004/0139355 July 2004 Axel et al.
2004/0148380 July 2004 Meyer et al.
2004/0153479 August 2004 Mikesell et al.
2004/0181605 September 2004 Nakatani et al.
2004/0199547 October 2004 Winter et al.
2004/0236798 November 2004 Srinivasan et al.
2005/0021615 January 2005 Arnott et al.
2005/0050107 March 2005 Mane et al.
2005/0091214 April 2005 Probert et al.
2005/0108575 May 2005 Yung
2005/0114291 May 2005 Becker-Szendy et al.
2005/0114701 May 2005 Atkins et al.
2005/0187866 August 2005 Lee
2005/0189501 September 2005 Sato et al.
2005/0246393 November 2005 Coates et al.
2005/0289109 December 2005 Arrouye et al.
2005/0289111 December 2005 Tribble et al.
2006/0010502 January 2006 Mimatsu et al.
2006/0075475 April 2006 Boulos et al.
2006/0080353 April 2006 Miloushev et al.
2006/0106882 May 2006 Douceur et al.
2006/0112151 May 2006 Manley et al.
2006/0123062 June 2006 Bobbitt et al.
2006/0161518 July 2006 Lacapra
2006/0167838 July 2006 Lacapra
2006/0179261 August 2006 Rajan
2006/0184589 August 2006 Lees et al.
2006/0190496 August 2006 Tsunoda
2006/0200470 September 2006 Lacapra et al.
2006/0212746 September 2006 Amegadzie et al.
2006/0224687 October 2006 Popkin et al.
2006/0230265 October 2006 Krishna
2006/0242179 October 2006 Chen et al.
2006/0259949 November 2006 Schaefer et al.
2006/0271598 November 2006 Wong et al.
2006/0277225 December 2006 Mark et al.
2006/0282461 December 2006 Marinescu
2006/0282471 December 2006 Mark et al.
2007/0024919 February 2007 Wong et al.
2007/0027929 February 2007 Whelan
2007/0028068 February 2007 Golding et al.
2007/0088702 April 2007 Fridella et al.
2007/0136308 June 2007 Tsirigotis et al.
2007/0208748 September 2007 Li
2007/0209075 September 2007 Coffman
2007/0226331 September 2007 Srinivasan et al.
2008/0046432 February 2008 Anderson et al.
2008/0070575 March 2008 Claussen et al.
2008/0104443 May 2008 Akutsu et al.
2008/0209073 August 2008 Tang
2008/0222223 September 2008 Srinivasan et al.
2008/0243769 October 2008 Arbour et al.
2008/0282047 November 2008 Arakawa et al.
2009/0007162 January 2009 Sheehan
2009/0037975 February 2009 Ishikawa et al.
2009/0041230 February 2009 Williams
2009/0055607 February 2009 Schack et al.
2009/0077097 March 2009 Lacapra et al.
2009/0089344 April 2009 Brown et al.
2009/0094252 April 2009 Wong et al.
2009/0106255 April 2009 Lacapra et al.
2009/0106263 April 2009 Khalid et al.
2009/0132616 May 2009 Winter et al.
2009/0204649 August 2009 Wong et al.
2009/0204650 August 2009 Wong et al.
2009/0204705 August 2009 Marinov et al.
2009/0210431 August 2009 Marinkovic et al.
2009/0254592 October 2009 Marinov et al.
2010/0211547 August 2010 Kamei et al.
2011/0087696 April 2011 Lacapra
Foreign Patent Documents
2003300350 Jul 2004 AU
2512312 Jul 2004 CA
0 738 970 Oct 1996 EP
63010250 Jan 1988 JP
6-332782 Dec 1994 JP
08-328760 Dec 1996 JP
08-339355 Dec 1996 JP
9016510 Jan 1997 JP
11282741 Oct 1999 JP
566291 Dec 2008 NZ
WO 02/056181 Jul 2002 WO
WO 2004/061605 Jul 2004 WO
WO 2008/130983 Oct 2008 WO
WO 2008/147973 Dec 2008 WO

Other References

"Auspex Storage Architecture Guide," Second Edition, 2001, Auspex Systems, Inc., www.ausoex.com, last accessed on Dec. 30, 2002. cited by other .
"CSA Persistent File System Technology, Colorado Software" Architecture, Inc. White Paper, Jan. 1999, p. 1-3. cited by other .
"Distributed File System: A Logical View of Physical Storage : White Paper," 1999, Microsoft Corp., www.microsoft.com, last accessed on Dec. 20, 2002. cited by other .
"How DFS Works: Remote File Systems," Distributed File System (DFS) Technical Reference, retrieved from the Internet on Feb. 13, 2009: URL:http://technelmicrosoft.com/en-us/library/cc782417.aspx> (Mar. 2003). cited by other .
"NERSC Tutorials: I/O on the Cray T3E," chapter 8, "Disk Striping," National Energy Research Scientific Computing Center (NERSC), http://hpcf.nersc.gov, last accessed on Dec. 27, 2002. cited by other .
"Scaling Next Generation Web Infrastructure with Content-Intelligent Switching : White Paper," Apr. 2000, Alteon WebSystems, Inc., (now Nortel Networks). cited by other .
"VERITAS SANPoint Foundation Suite(tm) and SANPoint Foundation(tm) Suite HA: New VERITAS Volume Management and File System Technology for Cluster Environments," Sep. 2001, VERITAS Software Corp. cited by other .
"Windows Clustering Technologies--An Overview," Nov. 2000, Microsoft Corp., www.microsoft.com, last accessed on Dec. 30, 2002. cited by other .
Aguilera et al., "Improving Recoverability in Multi-Tier Storage Systems," International Conference on Dependable Systems and Networks (DSN-2007), Edinburgh, Scotland, Jun. 2007, 10 pages. cited by other .
Anderson et al., "Interposed Request Routing for Scalable Network Storage," ACM Transactions on Computer Systems 20(1):1-24 (Feb. 2002). cited by other .
Anderson et al., "Serverless Network File System," in the 15th Symposium on Operating Systems Principles, Dec. 1995, Association for Computing Machinery, Inc. cited by other .
Apple, Inc., "Tiger Developer Overview Series: Working with Spotlight," Nov. 23, 2004, www.apple.com using www.archive.org <http://web.archive.org/web/20041123005335/developer.apple.com/macosx/tiger/spotlight.html>, pp. 1-6. cited by other .
Cabrera et al, "Using Data Striping in a Local Area Network," 1992, technical report No. UCSC-CRL-92-09 of the Computer & Information Sciences Department of University of California at Santa Cruz. cited by other .
Cabrera et al., "Swift: A Storage Architecture for Large Objects," Proceedings of the Eleventh IEEE Symposium on Mass Storage Systems, pp. 123-428, Oct. 1991. cited by other .
Cabrera et al., "Swift: Using Distributed Disk Striping to Provide High I/O Data Rates," Computing Systems 4, 4 (Fall 1991), pp. 405-436. cited by other .
Callaghan et al., "NFS Version 3 Protocol Specification," (RFC 1813), 1995, The Internet Engineering Task Force (IETF), www.ietf.org, last accessed on Dec. 30, 2002. cited by other .
Carns et al., "PVFS: A Parallel File System for Linux Clusters," Proceedings of the 4th Annual Linux Showcase and Conference, pp. 317-327, Atlanta, Georgia, Oct. 2000, USENIX Association. cited by other .
Cavale, M. R., "Introducing Microsoft Cluster Service (MSCS) in the Windows Server 2003," Microsoft Corporation, Nov. 2002. cited by other .
English Translation of Notification of Reason(s) for Refusal for JP 2002-556371 (Dispatch Date: Jan. 22, 2007). cited by other .
Fan et al., "Summary Cache: A Scalable Wide-Area Web Cache Sharing Protocol," Computer Communications Review, Association for Computing Machinery, New York, USA 28(4):254-265 (1998). cited by other .
Farley, "Building Storage Networks," Jan. 2000, McGraw-Hill, ISBN 0072120509. cited by other .
Gibson et al., "File Server Scaling with Network-Attached Secure Disks," in Proceedings of the ACM International Conference on Measurement and Modeling of Computer Systems (Sigmetrics '97), 1997, Association for Computing Machinery, Inc. cited by other .
Gibson et al., "NASD Scalable Storage Systems," Jun. 1999, USENIX99, Extreme Linux Workshop, Monterey, California. cited by other .
Hartman, "The Zebra Striped Network File System," 1994, Ph.D. dissertation submitted in the Graduate Division of the University of California at Berkeley. cited by other .
Haskin et al., "The Tiger Shark File System," 1995, in proceedings of IEEE, Spring COMPCON, Santa Clara, CA, www.research.ibm.com, last accessed on Dec. 30, 2002. cited by other .
Hwang et al., Designing SSI Clusters with Hierarchical Checkpointing and Single I/O Space, IEEE Concurrency, pp. 60-69, Jan.-Mar. 1999. cited by other .
International Search Report for International Patent Application No. PCT/US02/00720 (Jul. 8, 2004). cited by other .
International Search Report for International Patent Application No. PCT/US03/41202 (Sep. 15, 2005). cited by other .
International Search Report for International Patent Application No. PCT/US2008/060449 (Apr. 9, 2008). cited by other .
International Search Report for International Patent Application No. PCT/US2008/064677 (Sep. 6, 2009). cited by other .
International Search Report for International Patent Application No. PCT/US2008/083117 (Jun. 23, 2009). cited by other .
Karamanolis et al., "An Architecture for Scalable and Manageable File Services," HPL-2001-173 p. 1-14 (Jul. 26, 2001). cited by other .
Katsurashima et al., "NAS Switch: A Novel CIFS Server Virtualization," Proceedings, 20th IEEE/11th NASA Goddard Conference on Mass Storage Systems and Technologies, 2003 (MSST 2003) (Apr. 2003). cited by other .
Kimball, C.E. et al., Automated Client-Side Integration of Distributed Application Servers, 13th LISA Conf. (1999). cited by other .
Kohl et al., "The Kerberos Network Authentication Service (V5)," RFC 1510, Sep. 1993 (http://www.ietf.org/rfc/rfc1510.txt). cited by other .
Long et al., "Swift/RAID: A distributed RAID system," Computing Systems, vol. 7, pp. 333-359, Summer 1994. cited by other .
Noghani et al., "A Novel Approach to Reduce Latency on the Internet: 'Component-Based Download'," Proceedings of the Int'l Conf. on Internet Computing, Las Vegas, NV, pp. 1-6 (2000). cited by other .
Norton et al., "CIFS Protocol Version CIFS-Spec 0.9," 2001, Storage Networking Industry Association (SNIA), www.snia.org, last accessed on Mar. 26, 2001. cited by other .
Patterson et al., "A Case for Redundant Arrays of Inexpensive Disks (RAID)," Chicago, Illinois, Jun. 1-3, 1988, in Proceedings of the ACM SIGMOD Conference on the Management of Data, pp. 109-116, Association for Computing Machinery, Inc., www.acm.org, last accessed on Dec. 20, 2002. cited by other .
Pearson, P.K., "Fast Hashing of Variable-Length Text Strings," Comm. of the ACM, vol. 33, No. 6, Jun. 1990. cited by other .
Peterson, "Introducing Storage Area Networks," Feb. 1998, InfoStor, www.infostor.com, last accessed on Dec. 20, 2002. cited by other .
Preslan et al., "Scalability and Failure Recovery in a Linux Cluster File System," in Proceedings of the 4th Annual Linux Showcase & Conference, Atlanta, Georgia, Oct. 10-14, 2000, www.usenix.org, last accessed on Dec. 20, 2002. cited by other .
Rodriguez et al., "Parallel-Access for Mirror Sites in the Internet," INFOCOM 2000, Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies, Proceedings, IEEE, Tel Aviv, Israel, Mar. 26-30, 2000, Piscataway, NJ, USA, pp. 864-873, ISBN 0-7803-5880-5 (see p. 867, col. 2, last paragraph through p. 868, col. 1, paragraph 1). cited by other .
Rsync, "Welcome to the RSYNC Web Pages," Retrieved from the Internet URL: http://samba.anu.edu.au/rsync/ (Retrieved Dec. 18, 2009). cited by other .
Savage, et al., "AFRAID--A Frequently Redundant Array of Inexpensive Disks," 1996 USENIX Technical Conf., San Diego, California, Jan. 22-26, 1996. cited by other .
Soltis et al., "The Design and Performance of a Shared Disk File System for IRIX," 6th NASA Goddard Space Flight Center Conf. on Mass Storage & Technologies, IEEE Symposium on Mass Storage Systems, p. 1-17 (Mar. 1998). cited by other .
Sorenson, K.M., "Installation and Administration: Kimberlite Cluster Version 1.1.0, Rev. D." Mission Critical Linux, (Dec. 2000) http://oss.missioncriticallinux.com/kimberlite/kimberlite.pdf. cited by other .
Stakutis, "Benefits of SAN-based file system sharing," Jul. 2000, InfoStor, www.infostor.com, last accessed on Dec. 30, 2002. cited by other .
Thekkath et al., "Frangipani: A Scalable Distributed File System," in Proceedings of the 16th ACM Symposium on Operating Systems Principles, Oct. 1997, Association for Computing Machinery, Inc. cited by other .
Wilkes, J., et al., "The HP AutoRAID Hierarchical Storage System," ACM Transactions on Computer Systems, vol. 14, No. 1, Feb. 1996. cited by other .
Zayas, "AFS-3 Programmer's Reference: Architectural Overview," Transarc Corp., version 1.0 of Sep. 2, 1991, doc. No. FS-00-D160. cited by other .
"The AFS File System in Distributed Computing Environment," www.transarc.ibm.com/Library/whitepapers/AFS/afsoverview.html, last accessed on Dec. 20, 2002. cited by other .
Basney et al., "Credential Wallets: A Classification of Credential Repositories Highlighting MyProxy," Sep. 19-21, 2003, pp. 1-20, 31st Research Conference on Communication, Information and Internet Policy (TPRC 2003), Arlington, Virginia. cited by other .
Botzum, Keys, "Single Sign On--A Contrarian View," Aug. 6, 2001, pp. 1-8, Open Group Website, http://www.opengroup.org/security/topics.htm. cited by other .
Harrison, C., May 19, 2008 response to Communication pursuant to Article 96(2) EPC dated Nov. 9, 2007 in corresponding European patent application No. 02718824.2. cited by other .
Hu, J., Final Office action dated Sep. 21, 2007 for related U.S. Appl. No. 10/336,784. cited by other .
Hu, J., Office action dated Feb. 6, 2007 for related U.S. Appl. No. 10/336,784. cited by other .
Klayman, J., Nov. 13, 2008 e-mail to Japanese associate including instructions for response to office action dated May 26, 2008 in corresponding Japanese patent application No. 2002-556371. cited by other .
Klayman, J., Response filed by Japanese associate to office action dated Jan. 22, 2007 in corresponding Japanese patent application No. 2002-556371. cited by other .
Klayman, J., Jul. 18, 2007 e-mail to Japanese associate including instructions for response to office action dated Jan. 22, 2007 in corresponding Japanese patent application No. 2002-556371. cited by other .
Korkuzas, V., Communication pursuant to Article 96(2) EPC dated Sep. 11, 2007 in corresponding European patent application No. 02718824.2-2201. cited by other .
Lelil, S., "Storage Technology News: AutoVirt adds tool to help data migration projects," Feb. 25, 2011, last accessed Mar. 17, 2011, <http://searchstorage.techtarget.com/news/article/0,289142,sid5_gci1527986,00.html>. cited by other .
Novotny et al., "An Online Credential Repository for the Grid: MyProxy," 2001, pp. 1-8. cited by other .
Pashalidis et al., "A Taxonomy of Single Sign-On Systems," 2003, pp. 1-16, Royal Holloway, University of London, Egham, Surrey, TW20 0EX, United Kingdom. cited by other .
Pashalidis et al., "Impostor: A Single Sign-On System for Use from Untrusted Devices," Global Telecommunications Conference, 2004, GLOBECOM '04, IEEE, Issue Date: Nov. 29-Dec. 3, 2004, Royal Holloway, University of London. cited by other .
Response filed Jul. 6, 2007 to Office action dated Feb. 6, 2007 for related U.S. Appl. No. 10/336,784. cited by other .
Response filed Mar. 20, 2008 to Final Office action dated Sep. 21, 2007 for related U.S. Appl. No. 10/336,784. cited by other .
Soltis et al., "The Global File System," Sep. 17-19, 1996, in Proceedings of the Fifth NASA Goddard Space Flight Center Conference on Mass Storage Systems and Technologies, College Park, Maryland. cited by other .
Tulloch, Mitch, "Microsoft Encyclopedia of Security," 2003, pp. 218, 300-301, Microsoft Press, Redmond, Washington. cited by other .
Uesugi, H., Nov. 26, 2008 amendment filed by Japanese associate in response to office action dated May 26, 2008 in corresponding Japanese patent application No. 2002-556371. cited by other .
Uesugi, H., English translation of office action dated May 26, 2008 in corresponding Japanese patent application No. 2002-556371. cited by other .
Uesugi, H., Jul. 15, 2008 letter from Japanese associate reporting office action dated May 26, 2008 in corresponding Japanese patent application No. 2002-556371. cited by other.

Primary Examiner: Baderman; Scott
Assistant Examiner: Arcos; Jeison C
Attorney, Agent or Firm: LeClairRyan, a Professional Corporation

Parent Case Text



This application claims the benefit of U.S. Provisional Patent Application Ser. No. 61/013,539, filed Dec. 13, 2007, which is herein incorporated by reference in its entirety.
Claims



What is claimed is:

1. A method for generating a unified virtual snapshot, the method comprising: generating, at a file virtualization device, a uniquely identifiable virtual snapshot configuration record identifying each of a plurality of independent data storage systems in a heterogeneous storage network system, wherein at least one of the independent data storage systems is configured to store metadata associated with content stored by one or more of the independent data storage systems; storing, with the file virtualization device, the virtual snapshot configuration record at each of the independent data storage systems; invoking, with the file virtualization device, a capture of a plurality of physical snapshots of each of the independent data storage systems, wherein each of the physical snapshots comprises the virtual snapshot configuration record; mapping, at the file virtualization device, the captured plurality of physical snapshots together to generate the unified virtual snapshot; and storing, with the file virtualization device, the generated unified virtual snapshot.

2. The method as set forth in claim 1 further comprising suspending, with the file virtualization device, data storage related communications between one or more network systems and the plurality of independent data storage systems during at least the invoking step.

3. The method as set forth in claim 2 further comprising resuming, with the file virtualization device, data storage related communications between the one or more network systems and the plurality of independent data storage systems upon the generation of the unified virtual snapshot.

4. The method as set forth in claim 3 wherein the suspending further comprises storing data storage related communications between the one or more network systems and the plurality of data storage systems received during the suspension until data storage related communications are resumed.

5. The method as set forth in claim 4 further comprising: providing, with the file virtualization device, an acknowledgement of completion of at least one of the stored data storage related communications; and completing, with the file virtualization device, the at least one of the stored data storage related communications after the resuming.

6. The method as set forth in claim 1 further comprising recovering, with the file virtualization device, content in at least one of the plurality of independent data storage systems with the generated unified virtual snapshot.

7. The method as set forth in claim 1, wherein the virtual snapshot configuration record is configured to allow locating a particular file or directory in one or more of the independent data storage systems when a format of metadata stored in one of the independent data storage systems is known.

8. A non-transitory computer readable medium having stored thereon instructions for generating a unified virtual snapshot comprising machine executable code which when executed by at least one processor, causes the processor to perform the steps comprising: generating a uniquely identifiable virtual snapshot configuration record identifying each of a plurality of independent data storage systems in a heterogeneous storage network system, wherein at least one of the independent data storage systems is configured to store metadata associated with content stored by one or more of the independent data storage systems; storing the virtual snapshot configuration record at each of the independent data storage systems; invoking a capture of a plurality of physical snapshots of each of the independent data storage systems, wherein each of the physical snapshots comprises the virtual snapshot configuration record; mapping the captured plurality of physical snapshots together to generate the unified virtual snapshot; and storing the generated unified virtual snapshot.

9. The medium as set forth in claim 8 further having stored thereon instructions that when executed by the at least one processor cause the processor to perform steps further comprising suspending data storage related communications between one or more network systems and the plurality of independent data storage systems during at least the invoking step.

10. The medium as set forth in claim 9 further having stored thereon instructions that when executed by the at least one processor cause the processor to perform steps further comprising resuming data storage related communications between the one or more network systems and the plurality of independent data storage systems upon the generation of the unified virtual snapshot.

11. The medium as set forth in claim 10 wherein the suspending further comprises storing data storage related communications between one or more network systems and a plurality of data storage systems received during the suspension until the data storage related communications is resumed.

12. The medium as set forth in claim 11 further having stored thereon instructions that when executed by the at least one processor cause the processor to perform steps further comprising: providing an acknowledgement of completion of at least one of the stored data storage related communications; and completing the at least one of the stored data storage related communications after the resuming.

13. The medium as set forth in claim 8 further having stored thereon instructions that when executed by the at least one processor cause the processor to perform steps further comprising recovering content in at least one of the plurality of independent data storage systems with the generated unified virtual snapshot.

14. The medium as set forth in claim 8 wherein the virtual snapshot configuration record is configured to allow locating a particular file or directory in one or more of the independent data storage systems when a format of metadata stored in one of the independent data storage systems is known.

15. A system that generates a unified virtual snapshot, the system comprising: a plurality of independent data storage systems in a heterogeneous storage network system, wherein at least one of the independent data storage systems is configured to store metadata associated with content stored by one or more of the independent data storage systems; a file virtualization device including at least one of configurable hardware logic configured to be capable of implementing and a processor coupled to a memory and configured to execute programmed instructions stored in the memory comprising: generating a uniquely identifiable virtual snapshot configuration record identifying each of a plurality of independent data storage systems in a heterogeneous storage network system, wherein at least one of the independent data storage systems is configured to store metadata associated with content stored by one or more of the independent data storage systems; storing the virtual snapshot configuration record at each of the independent data storage systems; invoking a capture of a plurality of physical snapshots of each of the independent data storage systems, wherein each of the physical snapshots comprises the virtual snapshot configuration record; mapping the captured plurality of physical snapshots together to generate the unified virtual snapshot and storing the generated unified virtual snapshot.

16. The system as set forth in claim 15 wherein at least one of the configurable hardware logic is further configured to be capable or the processor coupled to the memory is further configured to execute programmed instructions stored in the memory further comprising suspending data storage related communications between one or more network systems and the plurality of independent data storage systems during at least the invoking step.

17. The system as set forth in claim 16 wherein at least one of the configurable hardware logic is further configured to be capable or the processor coupled to the memory is further configured to execute programmed instructions stored in the memory further comprising resuming data storage related communications between one or more network systems and the plurality of independent data storage systems upon the generation of the unified virtual snapshot.

18. The system as set forth in claim 17 wherein at least one of the configurable hardware logic is further configured to be capable or the processor coupled to the memory is further configured to execute programmed instructions stored in the memory further comprising storing data storage related communications between one or more network systems and a plurality of data storage systems received during the suspension until the data storage related communications are resumed.

19. The system as set forth in claim 18 wherein at least one of the configurable hardware logic is further configured to be capable or the processor coupled to the memory is further configured to execute programmed instructions stored in the memory further comprising providing an acknowledgement of completion of at least one of the stored data storage related communications and completing the at least one of the stored data storage related communications after resuming.

20. The system as set forth in claim 15 wherein at least one of the configurable hardware logic is further configured to be capable or the processor coupled to the memory is further configured to execute programmed instructions stored in the memory further comprising recovering content in at least one of the plurality of independent data storage systems with the generated unified virtual snapshot.

21. The system as set forth in claim 15 wherein the virtual snapshot configuration record is configured to allow locating a particular file or directory in one or more of the independent data storage systems when a format of metadata stored in one of the independent data storage systems is known.

22. A file virtualization device, comprising: at least one of configurable hardware logic configured to be capable of implementing or a processor coupled to a memory and configured to execute programmed instructions stored in the memory comprising: generating a uniquely identifiable virtual snapshot configuration record identifying each of a plurality of independent data storage systems in a heterogeneous storage network system, wherein at least one of the independent data storage systems is configured to store metadata associated with content stored by one or more of the independent data storage systems; storing the virtual snapshot configuration record at each of the independent data storage systems; invoking a capture of a plurality of physical snapshots of each of the independent data storage systems, wherein each of the physical snapshots comprises the virtual snapshot configuration record; mapping the captured plurality of physical snapshots together to generate the unified virtual snapshot; and storing the generated unified virtual snapshot.

23. The device as set forth in claim 22, wherein at least one of the configurable hardware logic is further configured to be capable or the processor coupled to the memory is further configured to execute programmed instructions stored in the memory further comprising suspending data storage related communications between one or more network systems and the plurality of independent data storage systems during at least the invoking step.

24. The device as set forth in claim 23, wherein at least one of the configurable hardware logic is further configured to be capable or the processor coupled to the memory is further configured to execute programmed instructions stored in the memory further comprising resuming data storage related communications between one or more network systems and the plurality of independent data storage systems upon the generation of the unified virtual snapshot.

25. The device as set forth in claim 24, wherein at least one of the configurable hardware logic is further configured to be capable or the processor coupled to the memory is further configured to execute programmed instructions stored in the memory further comprising storing data storage related communications between one or more network systems and a plurality of data storage systems received during the suspension until the data storage related communications are resumed.

26. The device as set forth in claim 25, wherein at least one of the configurable hardware logic is further configured to be capable or the processor coupled to the memory is further configured to execute programmed instructions stored in the memory further comprising providing an acknowledgement of completion of at least one of the stored data storage related communications and completing the at least one of the stored data storage related communications after resuming.

27. The device as set forth in claim 22, wherein at least one of the configurable hardware logic is further configured to be capable or the processor coupled to the memory is further configured to execute programmed instructions stored in the memory further comprising recovering content in at least one of the plurality of independent data storage systems with the generated unified virtual snapshot.

28. The device as set forth in claim 22, wherein the virtual snapshot configuration record is configured to allow locating a particular file or directory in one or more of the independent data storage systems when a format of metadata stored in one of the independent data storage systems is known.
Description



FIELD OF THE INVENTION

This invention relates generally to methods and systems for capturing snapshots of file systems and, more particularly, to methods for generating a unified virtual snapshot from a plurality of physical snapshots of a heterogeneous network storage system and systems thereof.

BACKGROUND

Files and associated data in computer systems often are stored remotely on one or more network storage devices. In anticipation of a possible restore request from a user computer system coupled to a network storage device, a physical snapshot of the content in the network storage device may be captured at a recorded time. If the user computer system later requests a restore, the captured physical snapshot can be used to recover the contents of the network storage device as of the recorded time.

File virtualization systems provide methods for managing and presenting a plurality of network storage devices as a single, unified file system. Basically, file virtualization decouples the presentation of a file system from its physical composition. Unfortunately, when file virtualization is implemented, there is no method or system for generating and providing a unified virtual snapshot in a heterogeneous storage network system.

SUMMARY

A method for generating a unified virtual snapshot in accordance with embodiments of the present invention includes invoking with a file virtualization system a capture of a plurality of physical snapshots. Each of the physical snapshots comprises content at a given point in time in one of the plurality of data storage systems. A unified virtual snapshot is generated with the file virtualization system based on the captured plurality of the physical snapshots.

A computer readable medium having stored thereon instructions for methods for generating a unified virtual snapshot in accordance with other embodiments of the present invention comprising machine executable code which when executed by at least one processor, causes the processor to perform steps including invoking with a file virtualization system a capture of a plurality of physical snapshots. Each of the physical snapshots comprises content at a given point in time in one of the plurality of data storage systems. A unified virtual snapshot is generated with the file virtualization system based on the captured plurality of the physical snapshots.

A system that generates a unified virtual snapshot in accordance with other embodiments of the present invention includes an invocation system and a virtual snapshot system in a file virtualization system. The invocation system invokes a capture of a plurality of physical snapshots. Each of the physical snapshots comprises content in one of the plurality of data storage systems at a given point in time. The virtual snapshot system generates a unified virtual snapshot based on the captured plurality of the physical snapshots.

The present invention provides a number of advantages including providing a unified virtual snapshot from a plurality of physical snapshots of contents of file systems distributed across several independent, network storage devices. Additionally, the present invention provides a method and system which enables the use of snapshots in environments that implement file virtualization. Further, the present invention captures and generates snapshots which can be utilized to re-assemble contents of file systems with or without the file virtualization system.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram of an example of a system that generates and uses a virtual snapshot from a plurality of physical snapshots of a heterogeneous network storage system;

FIG. 2A is a functional block diagram of an example of a method for processing requests with file virtualization;

FIG. 2B is a flow chart of the example of the method for processing requests with file virtualization illustrated in FIG. 2A;

FIG. 3A is a functional block diagram of an example of a method for generating one or more unified virtual snapshots;

FIG. 3B is a flow chart of the example of the method for generating one or more unified virtual snapshots illustrated in FIG. 3A;

FIG. 4A is a functional block diagram of a method for processing requests with file virtualization after the creation of one or more unified virtual snapshots;

FIG. 4B is a flow chart of the example of the method for processing requests with file virtualization after the creation of one or more unified virtual snapshots illustrated in FIG. 4A;

FIG. 5A is a functional block diagram of the hierarchy of unified virtual snapshots and physical snapshots;

FIG. 5B is a flow chart of the example of the method for recovering content in a heterogeneous storage system;

FIG. 6 is a diagram of an example of a virtual snapshot configuration record; and

FIG. 7 is a diagram of an example of a snapshot command on a network storage device.

DETAILED DESCRIPTION

An example of a system 10 that generates and uses a virtual snapshot of a heterogeneous network storage system is illustrated in FIG. 1, although the present invention can be utilized in homogeneous network storage systems with one or more storage devices. This system 10 includes a client system 12, a file virtualization system 14, data storage systems 16(1) and 16(2), and metadata storage system 18, although this system 10 can include other numbers and types of systems, devices, equipment, parts, components, and/or elements in other configurations. The present invention provides a number of advantages including providing a unified virtual snapshot from a plurality of physical snapshots of contents of file systems distributed across several independent, network storage devices.

Referring more specifically to FIG. 1, the client system 12 utilizes the file virtualization system 14 to conduct one or more operations with one or more of the data storage systems 16(1) and 16(2) and the metadata storage system 18, such as to store a file, delete a file, create a file, and restore a file by way of example only, although other numbers and types of network systems could utilize these resources and other types and numbers of functions could be performed. The client system 12 includes a central processing unit (CPU) or processor, a memory, a user input device, a display, and an interface system, which are coupled together by a bus or other link, although the client system 12 can include other numbers and types of components, parts, devices, systems, and elements in other configurations. The processor in the client system 12 executes a program of stored instructions as described and illustrated herein, although the processor could execute other numbers and types of programmed instructions.

The memory in the client system 12 stores these programmed instructions for one or more aspects of the present invention as described and illustrated herein, although some or all of the programmed instructions could be stored and/or executed elsewhere. A variety of different types of memory storage devices, such as a random access memory (RAM) or a read only memory (ROM) in the system or a floppy disk, hard disk, CD ROM, or other computer readable medium which is read from and/or written to by a magnetic, optical, or other reading and/or writing system that is coupled to one or more processors, can be used for the memory in the client system 12.

The user input device in the client system 12 is used to input selections, such as to store a file, delete a file, create a file, and restore a file, although the user input device could be used to input other types of data and interact with other elements. The user input device can include a computer keyboard and a computer mouse, although other types and numbers of user input devices can be used. The display in the client system 12 is used to display information, such as a file or directory, although other types and amounts of information can be displayed in other manners. The display can include a computer display screen, such as a CRT or LCD screen, although other types and numbers of displays could be used.

The interface system in the client system 12 is used to operatively couple and communicate between the client system 12 and the file virtualization system 14 via a communications network 20, although other types and numbers of communication networks or systems with other types and numbers of configurations and connections to other systems and devices can be used.

The file virtualization system 14 manages file virtualization and the generation of unified virtual snapshots, although other numbers and types of systems can be used and other numbers and types of functions can be performed. The file virtualization system 14 includes a central processing unit (CPU) or processor, a memory, and an interface system which are coupled together by a bus or other link, although other numbers and types of components, parts, devices, systems, and elements in other configurations and locations can be used. The processor in the file virtualization system 14 executes a program of stored instructions for one or more aspects of the present invention as described and illustrated by way of the embodiments herein, such as managing file virtualization and the generation of unified virtual snapshots, although the processor in file virtualization system 14 could execute other numbers and types of programmed instructions.

The memory in the file virtualization system 14 stores these programmed instructions for one or more aspects of the present invention as described and illustrated herein, although some or all of the programmed instructions could be stored and/or executed elsewhere. A variety of different types of memory storage devices, such as a random access memory (RAM) or a read only memory (ROM) in the system or a floppy disk, hard disk, CD ROM, DVD ROM, or other computer readable medium which is read from and/or written to by a magnetic, optical, or other reading and/or writing system that is coupled to one or more processors, can be used for the memory in the file virtualization system 14.

The interface system in the file virtualization system 14 is used to operatively couple and communicate between the file virtualization system 14 and the client system 12, the data storage system 16(1), the data storage system 16(2), and the metadata storage system 18 via the communications networks 20, although other types and numbers of communication networks or systems with other types and numbers of connections and configurations can be used.

Each of the data storage systems 16(1) and 16(2) is a network storage device for files, directories, and other data, although other numbers and types of storage systems which could have other numbers and types of functions and store other data could be used. In this example, data storage system 16(1) is a different type of storage device, e.g. different make and/or model, from the data storage system 16(2) to form a heterogeneous network storage system, although the present invention can work with other numbers and types of storage systems, such as a homogeneous system.

Each of the data storage systems 16(1) and 16(2) includes a central processing unit (CPU) or processor, a memory, and an interface system which are coupled together by a bus or other link, although other numbers and types of components, parts, devices, systems, and elements in other configurations can be used. By way of example only, the storage systems may not have their own separate processing capabilities. In this example, the specialized processor in each of the data storage systems 16(1) and 16(2) executes a program of stored instructions for one or more aspects of the present invention as described and illustrated by way of the embodiments herein, such as to capture a physical snapshot by way of example only, although the processor in each of the data storage systems could execute other numbers and types of programmed instructions.

The memory in each of the data storage systems 16(1) and 16(2) stores these programmed instructions for one or more aspects of the present invention as described and illustrated herein, although some or all of the programmed instructions could be stored and/or executed elsewhere. A variety of different types of memory storage devices, such as a random access memory (RAM) or a read only memory (ROM) in the system or a floppy disk, hard disk, CD ROM, DVD ROM, or other computer readable medium which is read from and/or written to by a magnetic, optical, or other reading and/or writing system that is coupled to one or more processors, can be used for the memory in each of the data storage systems 16(1) and 16(2).

The interface system in the data storage system 16(1) and the interface system in the data storage system 16(2) are each used to operatively couple and communicate between the respective data storage system and the file virtualization system 14 via the communication network 20, although other types and numbers of communication networks or systems with other types and numbers of configurations and connections to other systems and devices can be used.

The metadata storage system 18 is another type of network storage device to store and manage global file virtualization metadata from the data storage systems 16(1) and 16(2), although other numbers and types of storage systems, which could have other numbers and types of functions, which could be connected in other manners, and which could store other types of data and information, could be used. In this particular example, the metadata storage system 18 is external to the file virtualization system 14, although the metadata storage system 18 could be located in the file virtualization system 14. The metadata storage system 18 includes a central processing unit (CPU) or processor, a memory, and an interface system which are coupled together by a bus or other link, although other numbers and types of components, parts, devices, systems, and elements in other configurations can be used for the storage system. By way of example only, the storage system may not have its own separate processing capabilities. In this example, the specialized processor in the metadata storage system 18 executes a program of stored instructions for one or more aspects of the present invention as described and illustrated by way of the embodiments herein, although the processor in the metadata storage system 18 could execute other numbers and types of programmed instructions.

The memory in the metadata storage system 18 stores these programmed instructions for one or more aspects of the present invention as described and illustrated herein, although some or all of the programmed instructions could be stored and/or executed elsewhere. A variety of different types of memory storage devices, such as a random access memory (RAM) or a read only memory (ROM) in the system or a floppy disk, hard disk, CD ROM, DVD ROM, or other computer readable medium which is read from and/or written to by a magnetic, optical, or other reading and/or writing system coupled to the processor, can be used for the memory in the metadata storage system 18.

The interface system in the metadata storage system 18 is used to operatively couple and communicate between the metadata storage system 18 and the file virtualization system 14 via the communications network 20, although other types and numbers of communication networks or systems with other types and numbers of configurations and connections to other systems and devices can be used.

Although embodiments of the client system 12, the file virtualization system 14, the data storage systems 16(1) and 16(2), and the metadata storage system 18 are described herein, each of these systems can be implemented on any suitable computer system or computing device. It is to be understood that the devices and systems of the embodiments described herein are for exemplary purposes, as many variations of the specific hardware and software used to implement the embodiments are possible, as will be appreciated by those skilled in the relevant art(s).

Furthermore, each of the systems of the embodiments may be conveniently implemented using one or more general purpose computer systems, microprocessors, digital signal processors, and micro-controllers, programmed according to the teachings of the embodiments, as described and illustrated herein, and as will be appreciated by those of ordinary skill in the art.

In addition, two or more computing systems or devices can be substituted for any one of the systems in any of the embodiments. Accordingly, principles and advantages of distributed processing, such as redundancy and replication, also can be implemented, as desired, to increase the robustness and performance of the devices and systems of the embodiments. The embodiments may also be implemented on a computer system or systems that extend across any suitable network using any suitable interface mechanisms and communications technologies, including by way of example only telecommunications in any suitable form (e.g., voice and modem), wireless communications media, wireless communications networks, cellular communications networks, G3 communications networks, Public Switched Telephone Networks (PSTNs), Packet Data Networks (PDNs), the Internet, intranets, and combinations thereof.

The embodiments may also be embodied as a computer readable medium having instructions stored thereon for one or more aspects of the present invention as described and illustrated by way of the embodiments herein, which, when executed by a processor, cause the processor to carry out the steps necessary to implement the methods of the embodiments, as described and illustrated herein.

An overview of the present invention is set forth below. In this example of the present invention, a file virtualization layer (FV1) provided by file virtualization system 14 exists between the application in client system 12 (also referred to as CL1) and the data storage systems 16(1) and 16(2) (also referred to as DS1 and DS2), although other numbers and types of client systems and storage systems could be used. This file virtualization layer provided by file virtualization system 14 manages metadata in metadata storage system 18 that tracks the location of files and directories distributed across data storage systems 16(1) and 16(2) in this particular example.
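
By way of illustration only, the location-tracking metadata managed by this file virtualization layer might be modeled as a simple map from virtual paths to physical locations, as in the following Python sketch; the class and method names here are hypothetical assumptions and do not come from the patent.

```python
# A minimal sketch of location-tracking file virtualization metadata,
# assuming a simple dict-backed model; all names here are hypothetical.

class VirtualizationMetadata:
    """Maps virtual file paths to (storage system, physical path) pairs."""

    def __init__(self):
        self._locations = {}  # virtual path -> (storage system id, physical path)

    def record(self, virtual_path, storage_system, physical_path):
        self._locations[virtual_path] = (storage_system, physical_path)

    def resolve(self, virtual_path):
        """Return the storage system and physical path backing a virtual path."""
        return self._locations[virtual_path]

    def remove(self, virtual_path):
        del self._locations[virtual_path]

# Example: a file presented at /vfs/a actually lives on DS1.
meta = VirtualizationMetadata()
meta.record("/vfs/a", "DS1", "/export1/a")
print(meta.resolve("/vfs/a"))  # ('DS1', '/export1/a')
```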

To generate or create a unified virtual snapshot, file virtualization system 14 erects an I/O barrier to substantially suspend data storage communications between client system 12 and data storage systems 16(1) and 16(2) and metadata storage system 18. This suspension permits either an administrator or an application at client system 12 or file virtualization system 14 to request or invoke a capture of physical snapshots of content on data storage systems 16(1) and 16(2) and metadata storage system 18 using an application programming interface (API) or command line interface (CLI), although other manners for invoking a capture of physical snapshots, such as a periodic automated invocation, could be used. Once all of the physical snapshots have been captured or otherwise completed, the unified virtual snapshot is generated by file virtualization system 14 and the I/O barrier is removed to allow data storage communications to resume. The unified virtual snapshot comprises the captured physical snapshots which are mapped together by file virtualization system 14 to form the virtual snapshot.
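
By way of example only, the overall sequence might be sketched in Python as follows. The barrier and storage-system interfaces (assert_io_barrier, write_configuration_record, take_snapshot) are hypothetical stand-ins for the API or CLI invocation described above, not the actual implementation.

```python
# A minimal sketch of the unified-virtual-snapshot sequence described
# above, assuming hypothetical barrier and storage-system interfaces.

import time
import uuid

def create_unified_virtual_snapshot(barrier, storage_systems):
    """Suspend I/O, capture one physical snapshot per member, then resume."""
    record = {"id": uuid.uuid4().hex,            # unique header field
              "members": [s.name for s in storage_systems],
              "created": time.time()}
    barrier.assert_io_barrier()                  # suspend data storage traffic
    try:
        for system in storage_systems:           # persist the record everywhere
            system.write_configuration_record(record)
        physical = [s.take_snapshot() for s in storage_systems]
        # The unified virtual snapshot is the captured physical snapshots
        # mapped together into one point-in-time view.
        return {"record": record, "physical_snapshots": physical}
    finally:
        barrier.remove_io_barrier()              # resume data storage traffic
```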

In this example, the I/O barrier is implemented by the file virtualization system 14 at the application protocol level, such as NFS or CIFS by way of example, although the I/O barrier could be implemented in other manners. Packets are accepted at a transport level, such as UDP or TCP by way of example, but are not proxied by file virtualization system 14 to the data storage systems 16(1) and 16(2), and traffic to the metadata storage system 18 is halted while the I/O barrier is asserted. As implemented, the barrier operation is substantially transparent to the operator at the client system 12 and at most the file system seems momentarily slow, although the system can be arranged in other manners, such as to provide notice of the implementation of the barrier if desired.
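
A minimal sketch of such a barrier is shown below, assuming requests arrive as discrete protocol operations that can be queued while the barrier is asserted; all names are illustrative assumptions rather than the patent's implementation.

```python
# A sketch of the application-protocol-level barrier: packets are still
# accepted at the transport level, but requests are queued rather than
# proxied while the barrier is asserted, so clients at most see a brief
# stall. All names here are illustrative.

import threading
from collections import deque

class IOBarrier:
    def __init__(self):
        self._asserted = False
        self._pending = deque()
        self._lock = threading.Lock()

    def assert_io_barrier(self):
        with self._lock:
            self._asserted = True

    def remove_io_barrier(self, forward=None):
        """Lower the barrier; replay queued requests via `forward` if given."""
        with self._lock:
            self._asserted = False
            pending, self._pending = self._pending, deque()
        if forward is not None:
            for request in pending:
                forward(request)
        return pending

    def admit(self, request, forward):
        """Accepted at the transport level; proxied only if the barrier is down."""
        with self._lock:
            if self._asserted:
                self._pending.append(request)    # held, not proxied
                return
        forward(request)
```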

To create a persistent record of the location of files and directories in physical snapshots, once the I/O barrier is asserted and before a physical snapshot occurs, the file virtualization system 14 initiates copying or writing of a virtual snapshot configuration record into data storage systems 16(1) and 16(2) and metadata storage system 18. The virtual snapshot configuration record is a unique record written in each of the data storage systems 16(1) and 16(2) and metadata storage system 18 that allows an operator or program to locate components of a virtual snapshot, e.g., a file, although other types and amounts of information could be included. More specifically, the snapshot configuration record records the members of a unified virtual snapshot, i.e., in this particular example the members are data storage systems 16(1) and 16(2) and metadata storage system 18, although the snapshot configuration record can store other types and amounts of data. By way of example only, a virtual snapshot configuration record is illustrated in FIG. 6. In this example, the virtual snapshot configuration records are made unique by a field in the header of each record, although other manners for providing a unique identifier can be used.

The virtual snapshot configuration record is included in the physical snapshots to aid in recovery. With the snapshot configuration record and the stored metadata on the file virtualization system 14, the file virtualization system 14 can locate a particular file or directory. Additionally, by including the virtual snapshot configuration record in the physical snapshot, an external application that knows the format of the stored metadata can use that metadata and the snapshot configuration record to locate a file without file virtualization.
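By way of example only, such a record might be modeled as follows; FIG. 6 shows the actual record layout, so the fields below are illustrative assumptions, with the unique header field represented as a generated identifier.

```python
# A sketch of a virtual snapshot configuration record; the fields below
# are illustrative only, not the layout of the patent's FIG. 6.

import json
import time
import uuid
from dataclasses import dataclass, field, asdict

@dataclass
class SnapshotConfigurationRecord:
    members: list                                   # e.g. ["DS1", "DS2", "MD1"]
    created_at: float = field(default_factory=time.time)
    record_id: str = field(default_factory=lambda: uuid.uuid4().hex)  # unique header field

    def serialize(self) -> bytes:
        """Byte form written to each member so its physical snapshot embeds it."""
        return json.dumps(asdict(self)).encode("utf-8")

record = SnapshotConfigurationRecord(members=["DS1", "DS2", "MD1"])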

Once a unique virtual snapshot configuration record is copied to each of the data storage systems 16(1) and 16(2) and metadata storage system 18, the data storage systems 16(1) and 16(2) and metadata storage system 18 are invoked by file virtualization system 14 to capture physical snapshots which will contain this virtual snapshot configuration record, although the data storage systems 16(1) and 16(2) and metadata storage system 18 can be invoked to capture physical snapshots by other systems in other manners. The data storage systems 16(1) and 16(2) and metadata storage system 18 each take a physical snapshot in response to this invocation.

By way of example only, a snapshot command which can be used by data storage systems 16(1) and 16(2) and metadata storage system 18 is illustrated in FIG. 7, although other types of commands could be used. Again, this method effectively embeds the unique virtual snapshot configuration record into each of the physical snapshots themselves.

Generation of unified virtual snapshots is implemented by the file virtualization layer in file virtualization system 14, although the generation can be implemented by other systems. Virtual directories are dynamically created in file virtualization system 14 that contain a list of the virtual snapshots available at different points in time. Each virtual snapshot subdirectory contains the files and directories that exist in the physical snapshots of the contents of the file systems on data storage systems 16(1) and 16(2) and metadata storage system 18 in this example.
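
By way of example only, rendering that dynamic listing might look like the following sketch; the directory-name format shown is an assumption, not a format specified by the patent.

```python
# A sketch of dynamically listing available unified virtual snapshots as
# virtual directory entries, one per snapshot capture time.

import datetime

def list_virtual_snapshots(snapshot_records):
    """Render one virtual subdirectory name per unified virtual snapshot."""
    entries = []
    for record in snapshot_records:
        stamp = datetime.datetime.fromtimestamp(record["created"])
        entries.append(stamp.strftime("snapshot-%Y%m%d-%H%M%S"))
    return sorted(entries)

# e.g. ['snapshot-20081212-010000', 'snapshot-20081213-010000']
```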

Referring now to FIGS. 2A and 2B, an example of a method for processing requests with a file virtualization system is described below. In step 22, client system 12 (also known as CL1) issues a request CL-REQ-1-1 for a file creation operation of a file `a` to file virtualization system 14 (also known as FV1), although other types and numbers of requests could be issued from other types and numbers of systems.

In step 24, file virtualization system 14 receives the request CL-REQ-1-1 from client system 12. Using the stored metadata, the file virtualization system 14 translates the request CL-REQ-1-1 into a file virtualization request FV-REQ-1-1 which is suitable for execution on data storage system 16(1) (also known as DS1) in which the file is actually located, although other types of requests for other systems could be received.

In step 26, data storage system 16(1) receives the request FV-REQ-1-1 from the file virtualization system 14. In response to the received request FV-REQ-1-1, the data storage system 16(1) performs the creation of file `a` and issues reply DS-RSP-1-1 back to file virtualization system 14, although the data storage system 16(1) could perform other types and numbers of operations based on the received request.

In step 28, file virtualization system 14 receives the reply DS-RSP-1-1 from the data storage system 16(1). In response to the reply DS-RSP-1-1, the file virtualization system 14 generates metadata about the file creation operation and transmits an FV-REQ-1-2 request to metadata storage system 18 (also known as MD1) to record this generated metadata.

In step 30, metadata storage system 18 receives the FV-REQ-1-2 request and stores the generated metadata. Once the FV-REQ-1-2 request is processed, the metadata storage system 18 issues a reply MD-RSP-1-1 to the file virtualization system 14.

In step 32, file virtualization system 14 receives the reply MD-RSP-1-1 from the metadata storage system 18. Next, the file virtualization system 14, using information gathered from the reply MD-RSP-1-1 and the reply DS-RSP-1-1, generates a file virtualization reply FV-RSP-1-1 and issues the reply FV-RSP-1-1 back to client system 12. The file virtualization system 14 also updates the stored file virtualization configuration record to reflect this completed operation.
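
By way of example only, the create path of steps 22-32 might be sketched as follows, reusing the hypothetical metadata map from earlier; the storage interfaces (create, store) and the placement policy shown are assumptions for illustration.

```python
# A sketch of the proxied file-creation path of steps 22-32; the
# data_storage and metadata_storage interfaces are hypothetical stand-ins.

def handle_create(fv_metadata, data_storage, metadata_storage, virtual_path):
    # Step 24: translate the client request into a physical request.
    physical_path = "/export1" + virtual_path          # placement policy assumed
    # Step 26: the owning data storage system performs the creation.
    ds_reply = data_storage.create(physical_path)
    # Steps 28-30: record the new location in the global metadata.
    fv_metadata.record(virtual_path, data_storage.name, physical_path)
    md_reply = metadata_storage.store(virtual_path, data_storage.name, physical_path)
    # Step 32: combine both replies into the reply returned to the client.
    return {"ok": bool(ds_reply and md_reply), "path": virtual_path}
```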

Referring now to FIGS. 3A and 3B, an example of a method for generating one or more unified virtual snapshots is described below. In step 50, client system 12 issues request CL-REQ-2-1 for a file deletion operation of file `a` to file virtualization system 14, although other types and numbers of requests could be issued from other types and numbers of systems.

In step 52, file virtualization system 14 accepts the request CL-REQ-2-1 from the client system 12, although other types and numbers of requests could be received. Since at this time an I/O barrier is asserted, the file virtualization system 14 performs no action at this time on the request CL-REQ-2-1 from the client system 12, although once the I/O barrier is removed the file virtualization system 14 will process the request.

In step 54, while the I/O barrier is asserted, the file virtualization system 14 generates and transmits a write request WRITE_REQ_2_* to each of the data storage systems 16(1) and 16(2) and the metadata storage system 18 to write the virtualization snapshot configuration record to persistent storage, although other types and numbers of requests can be transmitted to other types and numbers of systems. More specifically, in this particular example the file virtualization system 14 generates and transmits a write request WRITE_REQ_2_1 to data storage system 16(1), a write request WRITE_REQ_2_2 to data storage system 16(2), and a write request WRITE_REQ_2_3 to metadata storage system 18, so that each writes the virtualization snapshot configuration record to persistent storage.

Once the virtualization snapshot configuration record is written in persistent storage, each of the data storage systems 16(1) and 16(2) and the metadata storage system 18 generates and transmits a response WRITE_RSP_2_* to the file virtualization system 14, although other types and numbers of responses can be transmitted to other types and numbers of systems. More specifically, in this particular example data storage system 16(1) generates and transmits a WRITE_RSP_2_1, the data storage system 16(2) generates and transmits a WRITE_RSP_2_2, and the metadata storage system 18 generates and transmits a WRITE_RSP_2_3 to the file virtualization system 14 once the virtualization snapshot configuration record is written in persistent storage in each storage system.

In step 56, file virtualization system 14 optionally flushes metadata changes, write ahead logs, and any other information required to ensure consistency with the file virtualization metadata snapshot, although the file virtualization system 14 may perform other types and numbers of operations.

In step 58, once the optional flush operations described above in step 56 are completed, the file virtualization system 14 invokes the execution of snapshot operations on data storage systems 16(1) and 16(2) and metadata storage system 18 by generating and transmitting snapshot requests SNAP_REQ_2_*, although the snapshot operations can be invoked in other manners and physical snapshots can be taken in other types and numbers of systems. More specifically, in this particular example file virtualization system 14 generates and transmits request SNAP_REQ_2_1 to data storage system 16(1) to take a physical snapshot, request SNAP_REQ_2_2 to data storage system 16(2) to take a physical snapshot, and request SNAP_REQ_2_3 to metadata storage system 18 to take a physical snapshot.

In step 60, the data storage systems 16(1) and 16(2) and the metadata storage system 18 each receive and process the requests SNAP_REQ_2_1, SNAP_REQ_2_2, and SNAP_REQ_2_3, respectively, to perform a physical snapshot operation to capture a physical snapshot in each of the data storage systems 16(1) and 16(2) and the metadata storage system 18.

Once the physical snapshots have been taken, the data storage systems 16(1) and 16(2) and the metadata storage system 18 each generate and transmit a response SNAP_RSP_2_* to the file virtualization system 14, although other types and numbers of responses can be transmitted to other types and numbers of systems. More specifically, in this particular example data storage system 16(1) generates and transmits a SNAP_RSP_2_1, the data storage system 16(2) generates and transmits a SNAP_RSP_2_2, and the metadata storage system 18 generates and transmits a SNAP_RSP_2_3 to the file virtualization system 14 when each of the physical snapshots at the data storage systems 16(1) and 16(2) and the metadata storage system 18 have been taken.

In step 62, file virtualization system 14 receives completion notifications from data storage systems 16(1) and 16(2) and metadata storage system 18 indicating that the physical snapshots are completed, i.e., the data and metadata are consistent as of the point in time the I/O barrier was asserted, and then records completion of the unified virtual snapshot. Once all of the responses SNAP_RSP_2_* have been received, the file virtualization system 14 lowers the asserted I/O barrier and processes request CL-REQ-2-1 as well as any other requests.

Referring to FIGS. 4A and 4B, an example of a method for processing requests with a file virtualization after the creation of one or more unified virtual snapshots is described below. In step 70, client system 12 issues request CL-REQ-3-1 for a file deletion operation of file `a` to file virtualization system 14, although other types and numbers of requests could be issued from other types and numbers of systems.

In step 72, file virtualization system 14 receives the request CL-REQ-3-1 from client system 12. Using the stored metadata, the file virtualization system 14 translates the request CL-REQ-3-1 into a file virtualization request FV-REQ-3-1 which is suitable for execution on data storage system 16(1) in which the file is actually located, although other types of requests for other systems could be received.

In step 74, data storage system 16(1) receives the request FV-REQ-3-1 request from the file virtualization system 14. In response to the received request FV-REQ-3-1, the data storage system 16(1) performs the deletion of file `a` and issues reply DS-RSP-3-1 back to file virtualization system 14, although the data storage system 16(1) could perform other types and numbers of operations based on the received request. Although deleted by this operation, file `a` remains in the unified virtual snapshot generated as described with reference to FIGS. 3A and 3B.

In step 76, file virtualization system 14 receives the reply DS-RSP-3-1 from the data storage system 16(1). In response to the reply DS-RSP-3-1, the file virtualization system 14 generates metadata about the file deletion operation and transmits an FV-REQ-3-2 request to metadata storage system 18 to record this generated metadata.

In step 78, metadata storage system 18 receives the request FV-REQ-3-2 and updates the metadata stored on metadata storage system 18 to reflect the deletion of file `a`, although other types and numbers of updates could be recorded. Once the FV-REQ-3-2 request is processed, the metadata storage system 18 issues a reply MD-RSP-3-1 to the file virtualization system 14.

In step 80, file virtualization system 14 receives the reply MD-RSP-3-1 from the metadata storage system 18. Next, the file virtualization system 14, using information gathered from the reply MD-RSP-3-1 and the reply DS-RSP-3-1, generates and issues a file virtualization reply FV-RSP-3-1 back to client system 12.

An example of the hierarchy of unified virtual snapshots and physical snapshots is illustrated in the functional block diagram in FIG. 5A and is described below. As set forth in functional block 90, each virtual directory contains a virtual snapshot listing directory (VSLD). As set forth in functional block 92, each VSLD contains a list of virtual snapshots (VSNs). As set forth in functional block 94, each VSN is an aggregation of physical snapshots (PSNs). As set forth in functional block 96, each physical directory contains a physical snapshot listing directory (PSLD). As set forth in functional block 98, each PSLD contains a list of physical snapshots (PSNs). As set forth in functional block 100, each PSN is a "point in time" image of the file system, such as of data storage system 16(1), data storage system 16(2), or metadata storage system 18, by way of example only.
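
By way of example only, this hierarchy might be modeled as the following data structures; the class names and fields, including the capture-time field, are illustrative assumptions rather than the contents of FIG. 5A.

```python
# A sketch of the FIG. 5A hierarchy: a virtual snapshot listing directory
# (VSLD) holds virtual snapshots (VSNs), and each VSN aggregates physical
# snapshots (PSNs). The class names and fields are illustrative only.

from dataclasses import dataclass, field

@dataclass
class PhysicalSnapshot:                 # PSN: point-in-time image of one file system
    system: str                         # e.g. "DS1", "DS2", or "MD1"
    snapshot_id: str

@dataclass
class VirtualSnapshot:                  # VSN: an aggregation of PSNs
    name: str
    members: list                       # list of PhysicalSnapshot
    created_at: float = 0.0             # capture time (epoch seconds), assumed field

@dataclass
class VirtualSnapshotListingDirectory:  # VSLD: lists the available VSNs
    snapshots: list = field(default_factory=list)

vsld = VirtualSnapshotListingDirectory()
vsld.snapshots.append(VirtualSnapshot(
    name="snapshot-20081212-010000",
    members=[PhysicalSnapshot("DS1", "psn-1"),
             PhysicalSnapshot("DS2", "psn-2"),
             PhysicalSnapshot("MD1", "psn-3")]))
```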

An example of a method for recovering content in a heterogeneous storage system is illustrated in FIG. 5B and is described below. In step 120, the client system 12 generates and issues a request CL-REQ-2-1 to file virtualization system 14 to access a file in the unified virtual snapshot, although other types and numbers of requests could be issued.

In step 122, the file virtualization system 14 receives the request CL-REQ-2-1 from client system 12, although other types and numbers of requests could be received. The file virtualization system 14 processes the request CL-REQ-2-1, which includes a marker indicating a traversal of a virtual snapshot listing directory, although the request could include other types and numbers of indicators. By way of example only, the request could have a marker which indicates that a search of the virtual snapshot listing directory is needed to identify the virtual snapshot, or the absence of a marker could indicate the need for a search. If a search is indicated by processing the request CL-REQ-2-1, the file virtualization system 14 identifies the virtual snapshot in the virtual snapshot listing based on one or more factors, such as a particular date range in the request, although other manners for identifying the virtual snapshot can be used.

In step 124, the file virtualization system 14 associates request CL-REQ-2-1 with the identified virtual snapshot VSN-1 based on data in the processed request CL-REQ-2-1, such as a specific identification of the virtual snapshot VSN-1, although other manners for identifying the virtual snapshot can be used.
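
By way of example only, steps 122-124 might be sketched as follows, reusing the hypothetical VSLD shape sketched above; the request fields shown (snapshot_id, date_range) are assumptions for illustration.

```python
# A sketch of steps 122-124: the target virtual snapshot is identified
# either from an explicit identifier in the request or by searching the
# listing directory for a date range.

def identify_virtual_snapshot(vsld, request):
    """Return the matching VSN from the listing directory, or None."""
    snapshot_id = request.get("snapshot_id")
    if snapshot_id is not None:                    # explicit identification
        for vsn in vsld.snapshots:
            if vsn.name == snapshot_id:
                return vsn
    date_range = request.get("date_range")
    if date_range is not None:                     # marker indicated a search
        start, end = date_range
        for vsn in vsld.snapshots:
            if start <= vsn.created_at <= end:
                return vsn
    return None                                    # no matching virtual snapshot
```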

In step 126, based on data in the request CL-REQ-2-1, the file virtualization system 14 determines which of two methods for associating the request CL-REQ-2-1 with one of the captured physical snapshots of data storage system 16(1), data storage system 16(2), and metadata storage system 18 to use, although the file virtualization system 14 could determine which method to use in other manners and could select from other types and numbers of methods. In this particular example, one of these methods searches virtualization metadata (cached or persistent) to map the request to a captured physical snapshot and the other method searches the captured physical snapshot for each of the storage systems for the target of the request.

If in step 126 the file virtualization system 14 determines that the method which searches virtualization metadata should be used, then the file virtualization system 14 proceeds to step 128. In step 128, the file virtualization system 14 searches stored virtualization metadata (cached or persistent) to map the target identified in the request CL-REQ-2-1 to one of the captured physical snapshots of one of data storage system 16(1), data storage system 16(2), and metadata storage system 18. Based on the search, the file virtualization system 14 identifies one of these captured physical snapshots, although the file virtualization system 14 can perform other operations based on the result of this search, such as generating and transmitting a message to client system 12 that the request CL-REQ-2-1 cannot be completed.

If in step 126 the file virtualization system 14 determines that the method which searches the captured physical snapshots should be used, then the file virtualization system 14 proceeds to step 130. In step 130, the file virtualization system 14 searches the captured physical snapshots for each of the data storage systems 16(1) and 16(2) and metadata storage system 18 for a target identified in the request CL-REQ-2-1. Based on the search, the file virtualization system 14 identifies one of these captured physical snapshots, although the file virtualization system 14 can perform other operations based on the result of this search, such as generating and transmitting a message to client system 12 that the request CL-REQ-2-1 cannot be completed.
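
By way of example only, the two lookup methods of steps 128 and 130 might be contrasted as follows, reusing the hypothetical metadata map and snapshot shapes sketched earlier; the contains() probe is an assumed interface, not one named by the patent.

```python
# A sketch of the two lookup methods of steps 128 and 130.

def locate_by_metadata(fv_metadata, virtual_snapshot, target_path):
    """Method 1: map the target through cached or persistent metadata."""
    system, physical_path = fv_metadata.resolve(target_path)
    for psn in virtual_snapshot.members:
        if psn.system == system:
            return psn, physical_path
    return None, None

def locate_by_search(virtual_snapshot, target_path):
    """Method 2: probe each member physical snapshot for the target."""
    for psn in virtual_snapshot.members:
        if psn.contains(target_path):   # hypothetical probe of one snapshot
            return psn
    return None                         # caller may report the request cannot complete
```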

In step 132, once the captured physical snapshot has been identified, the file virtualization system 14 translates the request CL-REQ-2-1 into a format suitable for execution on the data storage system 16(1) or data storage system 16(2) from which the identified captured physical snapshot was taken. Once the request CL-REQ-2-1 has been translated, the file virtualization system 14 forwards the translated request CL-REQ-2-1 to the data storage system 16(1) or the data storage system 16(2) from which the identified captured physical snapshot was taken. The data storage system 16(1) or data storage system 16(2) from which the identified captured physical snapshot was taken processes the translated request CL-REQ-2-1, executes any operations, and generates and transmits a response back to the file virtualization system 14, although other types and numbers of operations could be performed based on the received translated request.

In step 134, the file virtualization system 14 translates the response from the data storage system 16(1), data storage system 16(2), or metadata storage system 18 which processed the translated request CL-REQ-2-1 and issues a reply back to the client system 12, although the file virtualization system 14 could perform other types and numbers of operations based on the received response.

Accordingly, as illustrated by the description herein, the present invention provides a number of advantages including providing a unified virtual snapshot from a plurality of physical snapshots of contents of file systems distributed across several independent network storage devices of dissimilar make and model. Additionally, the present invention provides a method and system which enables the use of snapshots in environments that implement file virtualization. Further, the present invention captures and generates snapshots which can be utilized to re-assemble contents of file systems with or without the file virtualization system.

Having thus described the basic concept of the invention, it will be rather apparent to those skilled in the art that the foregoing detailed disclosure is intended to be presented by way of example only, and is not limiting. Various alterations, improvements, and modifications will occur to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested hereby, and are within the spirit and scope of the invention. Additionally, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefor, is not intended to limit the claimed processes to any order except as may be specified in the claims. Accordingly, the invention is limited only by the following claims and equivalents thereto.

* * * * *
