US7124143B2 - Data migration in storage system - Google Patents

Data migration in storage system

Info

Publication number
US7124143B2
Authority
US
United States
Prior art keywords
storage node
target
data
logical unit
controller
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US10/879,424
Other versions
US20060004876A1 (en)
Inventor
Naoto Matsunami
Tetsuya Shirogane
Naoko Iwami
Kenta Shiga
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Priority to US11/120,447 (US7472240B2)
Priority to US11/234,459 (US7912814B2)
Assigned to HITACHI, LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: MATSUNAMI, NAOTO; SHIGA, KENTA; IWAMI, NAOKO; SHIROGANE, TETSUYA
Publication of US20060004876A1
Application granted
Publication of US7124143B2

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0646 - Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F 3/0647 - Migration mechanisms
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0602 - Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0604 - Improving or facilitating administration, e.g. storage management
    • G06F 3/0605 - Improving or facilitating administration, e.g. storage management by facilitating the interaction with a user or administrator
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0602 - Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0614 - Improving the reliability of storage systems
    • G06F 3/0617 - Improving the reliability of storage systems in relation to availability
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629 - Configuration or reconfiguration of storage systems
    • G06F 3/0635 - Configuration or reconfiguration of storage systems by changing the path, e.g. traffic rerouting, path reconfiguration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0668 - Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067 - Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 - Protocols
    • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 - Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 - Interfaces specially adapted for storage systems
    • G06F 3/0628 - Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0629 - Configuration or reconfiguration of storage systems
    • G06F 3/0637 - Permissions
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 2101/00 - Indexing scheme associated with group H04L61/00
    • H04L 2101/60 - Types of network addresses
    • H04L 2101/618 - Details of network addresses
    • H04L 2101/631 - Small computer system interface [SCSI] addresses
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 2101/00 - Indexing scheme associated with group H04L61/00
    • H04L 2101/60 - Types of network addresses
    • H04L 2101/618 - Details of network addresses
    • H04L 2101/645 - Fibre channel identifiers
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10 - TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10S - TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S 707/00 - Data processing: database and file management or data structures
    • Y10S 707/99941 - Database schema or data structure
    • Y10S 707/99942 - Manipulating data structure, e.g. compression, compaction, compilation

Definitions

  • the present invention relates to a storage system for use in a computer system.
  • the second storage system issues a read request to the first storage system so that data in the first storage system is copied into the second storage system.
  • the second storage system is provided with a copy pointer for recording the completion level of data copying to indicate the progress of data migration.
  • an I/O request issued by the host computer is accepted by the second storage system.
  • the second storage system refers to the copy pointer to see whether data in the request has already been copied to the second storage system. If so, the second storage system forwards the data to the host computer. If not, the second storage system reads the requested data from the first storage system for transfer to the host computer.
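  • The copy-pointer redirection described above can be sketched as follows. This is an illustrative model only, not the patented implementation; all class and method names are hypothetical.

```python
# Illustrative sketch of copy-pointer redirection during migration.
# All names are hypothetical; this is not the patented implementation.

class OldStorage:
    """Stands in for the first (migration-source) storage system."""
    def __init__(self, blocks):
        self.blocks = list(blocks)

    def read(self, lba):
        return self.blocks[lba]

class NewStorage:
    """The second (migration-destination) storage system."""
    def __init__(self, source):
        self.source = source
        self.blocks = [None] * len(source.blocks)
        self.copy_pointer = 0  # blocks with LBA < copy_pointer are migrated

    def background_copy_step(self):
        """Copy one block from the source and advance the pointer."""
        if self.copy_pointer < len(self.blocks):
            lba = self.copy_pointer
            self.blocks[lba] = self.source.read(lba)
            self.copy_pointer += 1

    def read(self, lba):
        """Host read: serve locally if already copied, else from the source."""
        if lba < self.copy_pointer:
            return self.blocks[lba]
        return self.source.read(lba)
```

With a four-block source, after two background copy steps reads of LBA 0 and 1 are served locally while reads of LBA 2 and 3 are forwarded to the old system; the host sees the same data either way.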
  • In JP-A-2000-187608, first, the connection between the first storage system and the host computer is terminated to establish another connection between the host computer and the second storage system. Then, data migration is performed from the first storage system to the second storage system. Once connected to the second storage system, the host computer issues an I/O request to the second storage system.
  • The concern here is that there is no disclosure in JP-A-2000-187608 about how an access path is changed between the host computer and the corresponding storage system, especially about how to make settings to the second storage system for an access destination of the host computer.
  • the host computer can be allowed to access the migration destination under the same conditions as for the migration source. Accordingly, it is desirable that such a transfer be realized.
  • a connection is established over a network among a storage system, a computer, and a name server for managing interrelation between initiators and targets.
  • the storage system includes first and second storage nodes.
  • the first storage node is provided with a first logical unit to which a first target is set.
  • the first target is the one interrelated to a first initiator set to the computer.
  • the second storage node is provided with a second logical unit.
  • For data migration from the first logical unit to the second logical unit, the first storage node forwards data stored in the first logical unit to the second storage node, and thus received data is then stored in the second logical unit.
  • the first storage node also forwards information about the first target to the second storage node. With such information, the second storage node then makes a target setting to the second logical unit.
  • Based on an instruction coming from the name server, the computer detects whether a target interrelated to its initiator is set to the second storage node. If so, the computer issues an access request toward the second logical unit, and the second storage node receives the request.
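  • The handover sequence above (copy data, forward the target information, re-register with the name server) can be sketched as follows. Every name and data structure here is illustrative, not the patent's actual interfaces.

```python
# Hypothetical sketch of a logical-unit handover between storage nodes
# via a name server; dict layouts and names are illustrative only.

def migrate_lu(first_sn, second_sn, name_server, lu_name):
    # 1. Copy the data of the first logical unit into the second one.
    second_sn["lus"][lu_name] = dict(first_sn["lus"][lu_name])

    # 2. Forward the target information; the second storage node sets
    #    the same target to the second logical unit.
    target_name, initiator = first_sn["targets"].pop(lu_name)
    second_sn["targets"][lu_name] = (target_name, initiator)

    # 3. The name server now maps the target to the second storage
    #    node, so the initiator can rediscover it there.
    name_server[target_name] = {"entity": second_sn["name"],
                                "initiator": initiator}
```

After the call, the name server reports the target on the second node, so the host's next discovery query steers its access requests to the migration destination under the same target name as before.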
  • FIG. 1 is a diagram showing an exemplary structure of a computer system in a first embodiment of the present invention
  • FIG. 2 is a diagram showing an exemplary structure of a storage node
  • FIG. 3 is a diagram showing an exemplary structure of memory provided to the storage node
  • FIGS. 4A and 4B are diagrams showing an exemplary structure of a logical unit
  • FIGS. 5A to 5D are diagrams showing an exemplary structure of an LU management table
  • FIG. 6 is a diagram showing an exemplary structure of a name server
  • FIG. 7A is a diagram showing an exemplary name management table during data migration
  • FIG. 7B is a diagram showing another exemplary name management table after data migration
  • FIG. 8 is a schematic diagram showing an exemplary process of migrating data in a logical unit from a storage node to another;
  • FIG. 9 is a flowchart of an exemplary process of, through addition of a new SN to the storage system of the first embodiment, migrating data from an LU of any existing SN to an LU of the newly-added SN;
  • FIG. 10 is a flowchart of an exemplary process of, through addition of a new SN to a network in a second embodiment of the present invention, migrating data from an LU of any existing SN to an LU of the newly-added SN;
  • FIG. 11 is a diagram showing an exemplary system structure in a third embodiment of the present invention.
  • FIG. 12 is a diagram showing an exemplary system structure in a fourth embodiment of the present invention.
  • FIG. 13 is a diagram showing an exemplary system structure in a fifth embodiment of the present invention.
  • FIG. 14 is a diagram showing an exemplary system structure in a sixth embodiment of the present invention.
  • FIG. 15A is a diagram showing an exemplary display screen of a management console 4 having displayed thereon the system structure before data migration;
  • FIG. 15B is a diagram showing another exemplary display screen of the management console 4 having displayed thereon the system structure after data migration;
  • FIG. 15C is a diagram showing still another exemplary display screen of the management console 4 having displayed thereon the interrelation among an LU, a target, and an initiator before data migration;
  • FIG. 15D is a diagram showing still another exemplary display screen of the management console 4 having displayed thereon the interrelation among the LU, the target, and the initiator after data migration.
  • component names and numbers are each provided with a lower-case alphabetic character such as a, b, or c for component distinction among those plurally provided in the same structure. If no such component distinction is required, no alphabetic character is provided to the component numbers.
  • FIG. 1 Exemplary System Structure
  • FIG. 1 is a diagram showing an exemplary system structure in a first embodiment.
  • a computer system includes: a plurality of storage nodes (in the below, simply referred to as SNs) 1 , a plurality of host computers (in the below, hosts) 2 , a network 30 , a switch 3 , a management console 4 , and a name server 5 .
  • the switch 3 is used for establishing a connection over the network 30 among a plurality of network nodes.
  • the network node is a collective expression for the SNs 1 , the hosts 2 , the management console 4 , the name server 5 , and others, all of which are connected to the network 30 .
  • the name server 5 is in charge of name management of the SNs 1 and the hosts 2 , and their logical connections.
  • the management console 4 is provided for managing a storage system 1000 structured by a plurality of SNs 1 .
  • the network 30 is a generic name for the switch 3 and a line for connecting the switch 3 with the hosts 2 , the SNs 1 , the management console 4 , the name server 5 , and others.
  • the network 30 is encircled by a dashed line.
  • the SNs 1 are each provided with a controller (CTL) 10 , and a logical unit (LU) 12 Xx being a logical disk unit to be accessed by the hosts 2 .
  • Xx denotes an identification of the corresponding LU
  • X is an integer of 0 or larger
  • x is a lower-case letter of the alphabet.
  • the controller 10 exercises control over disks connected to the corresponding SN 1 , and executes access requests coming from the hosts 2 .
  • the hosts 2 are each a computer including a CPU, memory, and a network controller for establishing a connection to the network 30 .
  • the memory includes an initiator management table 2112 , which will be described later.
  • the management console 4 is a computer including a CPU, memory, and a network controller for establishing a connection to the network 30 .
  • the memory stores a structure management program 4122 , an LU management table 1111 ′, an initiator management table 2112 or 1113 , and a target management table 1112 , all of which will be described later.
  • the management console 4 includes input units such as a keyboard and a mouse, and output units such as a display.
  • FIG. 2 is a diagram showing an exemplary hardware structure of the SN 1 .
  • the SN 1 includes the controller (CTL) 10 , and a plurality of disks 120 y to be connected to the CTL 10 through a Fibre Channel 1030 .
  • the CTL 10 exercises control over input/output to/from the disks 120 y.
  • the CTL 10 includes: a CPU 100 exercising control over the SN 1 ; memory 101 ; a network controller 102 for establishing a connection to the network 30 ; an FC controller 103 ; and a bridge 104 .
  • the memory 101 stores control programs to be executed by the CPU 100 and control data, and serves as cache for increasing the speed of disk access.
  • the FC controller 103 is provided for controlling the Fibre Channel (FC) 1030 to be connected to the disks 120 y .
  • the bridge 104 exercises control over data or program transfer between the CPU 100 and the memory 101 , data transfer between the network controller 102 and the memory 101 , and data transfer between the FC controller 103 and the memory 101 .
  • FIG. 3 is a diagram showing an exemplary structure of the memory 101 provided in the SN 1 .
  • the memory 101 is structured by a cache region 110 , a control data region 111 , and a control program region 112 .
  • the cache region 110 serves as a disk cache (in the below, simply referred to as cache) for temporarily storing data of the disks 120 y or copies thereof.
  • the control data region 111 is provided for storing various tables and others for reference by the CPU 100 at the time of execution of the control programs.
  • the various tables include a system structure management table 1110 , an LU management table 1111 , a target management table 1112 , and an initiator management table 1113 .
  • the system structure management table 1110 stores structure information about the storage system 1000 that is structured by a plurality of SNs 1 .
  • the LU management table 1111 stores structure information about the LU 12 Xx in the SN 1 .
  • the target management table 1112 stores a target name (in the below, simply referred to as target) being a logical address provided to the LU 12 Xx.
  • the initiator management table 1113 stores an initiator name (in the below, simply referred to as initiator) being a logical address of an access source from which the LU 12 Xx is accessed.
  • the target name or initiator name is exemplified by an iSCSI name in any system using the iSCSI protocol, a WWN (World Wide Name) in any FC systems, and others.
  • the target name is not restricted thereto as long as it is a globally unique identifier that is assigned to an access destination and does not change after being created until deleted. This is applicable also to the initiator name.
  • the target address or the initiator address may be used as information for identifying the access destination or the access source.
  • the target address is exemplified by but not restricted to a Destination ID in any system using the FC protocol
  • the initiator address is exemplified by but not restricted to a Source ID and others in any system using the FC protocol.
  • the target name and the target address are both information used for identification of address destination, and the initiator name and the initiator address are both information used for identification of address source.
  • the target address can be an alternative option for the target name, and the initiator address for the initiator name.
  • the target name and the target address are hereinafter collectively referred to as “target name”, and this is true to the initiator.
  • the control program region 112 is provided for storing the control programs to be executed by the CPU 100 .
  • the control program region 112 stores various programs as follows. That is, an operating system program 1120 serves as a basic program to execute the control programs in the environment; a TCP/IP program 1121 for data transmission and reception over the network 30 using the TCP/IP protocol; an iSCSI control program 1122 for connecting between the hosts 2 and the SNs 1 using the iSCSI protocol; and a target control program 1123 for controlling a target process at the time of access reception from the host 2 being the initiator to the LU 12 Xx being the target of the iSCSI.
  • the target process includes command reception from the host 2 , command interpretation after reception, and others.
  • the various programs further include: a RAID control program 1124 for controlling RAID (Redundant Arrays of Inexpensive Disks) structured by a plurality of disks 120 y of the SN 1 ; a cache control program 1125 for management control of the disk cache formed in the cache region 110 ; a disk control program 1126 for executing a disk control process such as command generation with respect to a single disk 120 y ; an FC control program 1127 for transmission and reception of command and data with the disk 120 y via the FC through control over the FC controller 103 ; an LU control program 1128 for structuring the LU 12 Xx being a logical volume through formation of RAID from the disks 120 y ; a migration program 1129 for executing a migration process for migrating data of the LU 12 Xx among the SNs 1 ; an initiator control program 1130 for controlling the SN 1 to operate as initiator of iSCSI at the time of migration process to forward data of the LU 12 Xx to any other SN 1 .
  • the network 30 is exemplified as an IP network for connection between the hosts 2 and the SNs 1 , the network protocol as the TCP/IP protocol, and the data protocol between the hosts 2 and the SNs 1 as the iSCSI protocol being a block I/O interface.
  • the present invention is surely not restricted thereto.
  • FIGS. 4A and 4B Exemplary Structure of LU
  • FIGS. 4A and 4B are diagrams showing an exemplary structure of the LU 12 Xx.
  • the SN 1 in the present embodiment is presumably provided with three disks of 1200 , 1201 , and 1202 .
  • the number of disks 120 y provided to the SN 1 is not restrictive thereto, and any number will do as long as it is at least one.
  • FIG. 4A is a diagram showing an exemplary structure of a RAID group (in the below, referred also to as RG).
  • the three disks of 1200 , 1201 , and 1202 structure a RAID group 12 of RAID 5 type, and the stripe size thereof is S blocks.
  • the block means a logical block defined by the SCSI protocol specifications, and a disk sector or 512 bytes is often defined as a logical block.
  • the block size is not restrictive, and surely any other value will do.
  • data is divided on the basis of S blocks for placement across the disks adjacent to one another.
  • the RAID group (RG) 12 structured as such includes two logical units LU 0 and LU 1 .
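  • The striping of FIG. 4A can be sketched as a mapping from an RG LBA to a disk and an on-disk block offset. The parity rotation scheme below is illustrative only (real RAID 5 layouts vary), assuming three disks and a stripe size of S blocks.

```python
# Illustrative RG LBA -> (disk, block-offset-on-disk) mapping for a
# 3-disk RAID 5 group with stripe size S blocks; the parity rotation
# used here is one of several schemes found in practice, chosen only
# for illustration.

def rg_lba_to_disk(lba, stripe_blocks, ndisks=3):
    stripe_no = lba // stripe_blocks           # which data stripe
    offset = lba % stripe_blocks               # offset within the stripe
    row = stripe_no // (ndisks - 1)            # stripe row across all disks
    col = stripe_no % (ndisks - 1)             # data column within the row
    parity_disk = row % ndisks                 # rotate parity per row
    disk = col if col < parity_disk else col + 1  # skip the parity disk
    return disk, row * stripe_blocks + offset
```

Each row of S-block stripes holds ndisks − 1 data stripes plus one parity stripe, so consecutive RG LBAs spread across the data disks while the parity disk rotates row by row.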
  • FIG. 4B is a diagram showing an exemplary structure of a logical unit.
  • the LU 0 ( 120 ) is a logical unit having the capacity of k blocks
  • the LU 1 ( 121 ) is a logical unit having the capacity of n blocks.
  • the logical block address (in the below, referred to as RG LBA) for the LU 0 is in a range from 0 to (k-1), and in a range from k to (k+n-1) for the LU 1 .
  • the LUs are each accessed from the hosts 2 using an LBA local to the corresponding LU (Local LBA) so that each LU can behave as if being an independent disk. That is, the Local LBA for the LU 0 ( 120 ) has the address starting from 0 to (k-1) being equal to the total capacity -1, and separately therefrom, the Local LBA for the LU 1 ( 121 ) has the address starting from 0 to (n-1).
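  • The Local LBA addressing above amounts to a simple offset translation into the RAID group's address space; a minimal sketch, with the helper name being an assumption:

```python
# Sketch of Local LBA -> RG LBA translation for the layout of FIG. 4B:
# LU0 occupies RG LBA 0..k-1 and LU1 occupies RG LBA k..k+n-1, while
# hosts address each LU starting from Local LBA 0. The helper name is
# illustrative, not from the patent.

def local_to_rg_lba(start_rg_lba, length, local_lba):
    if not 0 <= local_lba < length:
        raise ValueError("Local LBA out of range for this LU")
    return start_rg_lba + local_lba
```

For example, with k = 1000 and n = 500, Local LBA 0 of LU1 translates to RG LBA 1000, which is exactly the Start RG LBA recorded for LU1 in the LU management table.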
  • FIGS. 5A to 5D Exemplary structure of LU Management Table
  • FIGS. 5A to 5D are diagrams showing an exemplary structure of the LU management table 1111 stored in the memory 101 of the SN 1 .
  • LU denotes an LU number
  • RG denotes identification information of a RAID group having LUs structured therein.
  • Start RG LBA denotes an RG LBA located at the LU head in the RG
  • LEN denotes the LU capacity (unit of which is block)
  • Initiator denotes an initiator name of any initiator allowed to access the corresponding LU, e.g., initiator set to the host
  • Target denotes a target name assigned to the corresponding LU.
  • FIG. 5A shows an exemplary LU management table 1111 a of the SNa ( 1 a ).
  • the LU 0 a is located in the RG 0 a , and has the Start RG LBA of 0 and the capacity of k; the initiator allowed to access thereto is the host (Host a) 2 a with the initiator name of Init-a 0 , and the target name is Targ-a 0 .
  • the LU 1 a is located in the RG 0 a , and has the Start RG LBA of k and the capacity of n; the initiator allowed to access thereto is the host (Host b) 2 b with the initiator name of Init-b 0 , and the target name is Targ-a 1 .
  • while the LU and the target have a one-to-one relationship, there may be a case where a plurality of initiators are allowed to access a target.
  • the target control program 1123 responsively allows access only to the LU 12 Xx corresponding to the initiator whose initiator name is thus entered.
  • the column of Initiator in the LU management table 1111 is provided with a plurality of entries for registration of a plurality of initiator names.
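  • The access check implied by the LU management table can be sketched as follows: the target control program permits an access only when the requesting initiator's name is registered for the LU. The entries mirror FIG. 5A; the dict layout is an illustrative assumption.

```python
# Sketch of the per-LU access check: only initiators whose names are
# entered in the LU management table may access the corresponding LU.
# Entries mirror FIG. 5A; the data layout is illustrative only.

lu_management_table = {
    "LU0a": {"target": "Targ-a0", "initiators": {"Init-a0"}},
    "LU1a": {"target": "Targ-a1", "initiators": {"Init-b0"}},
}

def access_allowed(lu_name, initiator_name):
    entry = lu_management_table.get(lu_name)
    return entry is not None and initiator_name in entry["initiators"]
```

Because the Initiator column holds multiple entries, adding a second name to an LU's `initiators` set is all it takes to let another host share that target.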
  • the management console 4 also includes in the memory the LU management table 1111 ′, which is a combination of the LU management tables 1111 included in the SNs 1 connected to the network 30 . Compared with the LU management table 1111 , the LU management table 1111 ′ is additionally provided with identification information for the corresponding SN 1 as shown in FIG. 15C .
  • FIG. 6 is a diagram showing an exemplary structure of the name server 5 .
  • the name server 5 is provided with: a CPU 500 in charge of control entirely over the name server 5 ; memory 501 for storing control programs to be executed by the CPU 500 and control data; a network controller 502 for connecting to the network 30 ; and a bridge 504 exercising control over data or program transfer between the CPU 500 and the memory 501 , and data transfer between the network controller 502 and the memory 501 .
  • the memory 501 has a control data region 511 , and a control program region 512 .
  • the control data region 511 is provided for storing various tables and others for reference by the CPU 500 when executing the control programs.
  • the control data region 511 stores a name management table 5111 including initiator and target names for iSCSI, and the connection relation between the initiator and the target.
  • the control program region 512 is provided for storing the control programs to be executed by the CPU 500 .
  • the control program region 512 stores various programs as follows. That is, an operating system program 5120 serving as a basic program to execute the control programs in the environment; a TCP/IP program 5121 for data transmission and reception over the network 30 using the TCP/IP protocol; a name management program 5122 in charge of name management of the iSCSI nodes (i.e., hosts 2 and storage nodes SNs 1 ) to be connected over the network 30 , and controlling the interrelation between the initiators and iSCSI nodes; and a communications program 5123 for carrying out communications for name management of initiators (e.g., hosts 2 ) and targets (e.g., SNs 1 ) based on the iSCSI protocol specifications.
  • the name server 5 is exemplified by an iSNS (iSCSI Name Server) of the iSCSI protocol specifications. This is surely not restrictive, and to realize the present embodiment, any other name server specification can be used to construct a name server.
  • FIGS. 7A and 7B Exemplary Structure of Name Management Table
  • FIGS. 7A and 7B are diagrams showing an exemplary name management table 5111 stored in the memory 501 of the name server 5 .
  • the name management table 5111 includes the initiator management table ( 2112 or 1113 ) and the target management table 1112 .
  • Initiator denotes an initiator name under the management of an entry of the table
  • Entity denotes an identifier specifying to which device the initiator belongs
  • Portal denotes a portal including the initiator
  • PortalGr denotes a portal group including the portal.
  • Target denotes a target name under the management of an entry of the table
  • Initiator denotes an initiator name allowed to access the target
  • Entity denotes an identifier specifying to which device the target belongs
  • Portal denotes a portal including the target
  • PortalGr denotes a portal group including the portal.
  • the initiator management table in the name management table 5111 is the same as the initiator management table stored in the memory of the device having the initiator.
  • the target management table in the name management table 5111 is the same as the target management table stored in the memory of the device having the target.
  • the management console 4 includes, in the memory, the initiator management table and the target management table being the same as those in the name server 5 .
  • initiator management tables 2112 a and 2112 b of FIG. 7A are initiator management tables for the initiators of the Host a ( 2 a ) and the Host b ( 2 b ), respectively.
  • the Host a( 2 a ) includes in the memory the initiator management table 2112 a similar to the one shown in FIG. 7A
  • the Host b ( 2 b ) includes in the memory the initiator management table 2112 b similar to the one shown in FIG. 7A .
  • FIG. 7A also shows an initiator management table for an initiator located in the SNa ( 1 a ), and the SNa ( 1 a ) includes in the memory 101 the initiator management table 1113 similar to the one shown in FIG. 7A .
  • target management tables 1112 a and 1112 b of FIG. 7A are target management tables for the targets of the SNa ( 1 a ) and the SNb ( 1 b ), respectively.
  • the SNa ( 1 a ) includes in the memory 101 the target management table 1112 similar to the target management table 1112 a
  • the SNb ( 1 b ) includes in the memory 101 a target management table 1112 similar to the target management table 1112 b.
  • the name server 5 uses the name management table 5111 to collectively manage the initiator management tables of the initiators connected to the network 30 , and the target management tables of the targets connected to the network 30 .
  • FIG. 7A exemplarily shows three pairs of initiator and target.
  • a first pair includes an initiator Init-a 0 and a target Targ-a 0 .
  • the initiator Init-a 0 is located in a portal Ia 0 of the Host a( 2 a ), and belonging to a portal group IPGa 0 .
  • the target Targ-a 0 is located in a portal Ta 0 of the SNa ( 1 a ), and belonging to a portal group TPGa 0 to allow the initiator Init-a 0 to access thereto.
  • a second pair includes an initiator Init-b 0 and a target Targ-a 1 .
  • the initiator Init-b 0 is located in a portal Ib 0 of the Host b( 2 b ), and belonging to a portal group IPGb 0 .
  • the target Targ-a 1 is located in a portal Ta 1 of the SNa ( 1 a ), and belonging to a portal group TPGa 1 to allow the initiator Init-b 0 to access thereto.
  • a third pair includes an initiator Init-SNa 1 and a target Targ-b 0 .
  • the initiator Init-SNa 1 is located in a portal ISNa 1 of the SNa ( 1 a ), and belonging to a portal group IPGSNa 1 .
  • the target Targ-b 0 is located in a portal Tb 0 of the SNb ( 1 b ), and belonging to a portal group TPGb 0 .
  • the portal denotes a logical portal located in the Host 2 or in the network controller of the SN 1 , and is defined by a pair of an IP address of a physical port and a TCP port number.
  • a plurality of portals can be provided on any one specific physical port if the physical port is provided with a plurality of TCP ports.
  • the portal group includes a plurality of portals as an aggregate to be used as a single communications path. In the description below, portal groups are referred to only by their group names.
  • the pairs of initiator and target are made between any initiators and targets connected to the network 30 , and managed by the name management table 5111 .
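The pairing above can be pictured with a minimal sketch, assuming a plain key-value layout; the patent does not prescribe any implementation, and all field names merely mirror FIG. 7A:

```python
# Hypothetical sketch of the name management table 5111 of FIG. 7A.
# Field names mirror the patent's tables; the dict layout is an assumption.

initiators = {
    "Init-a0":   {"Entity": "Host a", "Portal": "Ia0",   "PortalGroup": "IPGa0"},
    "Init-b0":   {"Entity": "Host b", "Portal": "Ib0",   "PortalGroup": "IPGb0"},
    "Init-SNa1": {"Entity": "SNa",    "Portal": "ISNa1", "PortalGroup": "IPGSNa1"},
}

targets = {
    "Targ-a0": {"Entity": "SNa", "Portal": "Ta0", "PortalGroup": "TPGa0", "Initiator": "Init-a0"},
    "Targ-a1": {"Entity": "SNa", "Portal": "Ta1", "PortalGroup": "TPGa1", "Initiator": "Init-b0"},
    "Targ-b0": {"Entity": "SNb", "Portal": "Tb0", "PortalGroup": "TPGb0", "Initiator": "Init-SNa1"},
}

def accessible_targets(initiator_name):
    """Return the names of the targets the given initiator may access."""
    return [t for t, info in targets.items() if info["Initiator"] == initiator_name]
```

The three pairs of FIG. 7A then fall out of a simple lookup by initiator name.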
  • Described now is a process of balancing the load among the SNs 1 by adding a new storage node 1 to the storage system 1000 and migrating data from the LU 12 Xx of an existing storage node 1 to the newly-added SN 1 .
  • FIG. 8 is a schematic diagram showing, through addition of a new SN 1 to the storage system 1000 , an exemplary process of data migration from the LU 12 Xx of any existing SN 1 to the newly-added SN 1 . Note that FIG. 8 shows the state halfway through the construction process of the system of FIG. 1 .
  • at first, the storage system 1000 includes only the SNa ( 1 a ) and not the SNb ( 1 b ), together with the Host a( 2 a ) and the Host b( 2 b ).
  • the Host a( 2 a ) is making access to an LU 0 a ( 120 a ) of the SNa( 1 a ), and the Host b( 2 b ) is making access to an LU 1 a ( 121 a ) of the SNa ( 1 a ).
  • the Host a( 2 a ) includes an initiator, which is entered, under the initiator name Init-a 0 , into both the initiator management table 2112 a of the Host a( 2 a ) and the name management table 5111 of the name server 5 .
  • the Host b( 2 b ) includes an initiator, which is entered, under the initiator name Init-b 0 , into both the initiator management table 2112 b of the Host b( 2 b ) and the name management table 5111 of the name server 5 .
  • the LU 0 a ( 120 a ) of the SNa( 1 a ) is added as the target name of Targ-a 0 to the target management table 1112 of the SNa( 1 a ) and the name management table 5111 of the name server 5 .
  • Also added to the target management table 1112 and the name management table 5111 is Init-a 0 as the initiator allowed to access the target Targ-a 0 .
  • the LU 1 a ( 121 a ) of the SNa( 1 a ) is added as the target name of Targ-a 1 to the target management table 1112 of the SNa( 1 a ) and the name management table 5111 of the name server 5 .
  • Also added to the target management table 1112 and the name management table 5111 is Init-b 0 as the initiator allowed to access the target of Targ-a 1 .
  • FIG. 7A shows the name management table 5111 under such pair making.
  • the target management table 1112 and the name management table 5111 are added with initiators in accordance with the iSCSI protocol specifications. Assumed here is that the Host a( 2 a ) is already operating under the state accessible to the LU 0 a ( 120 a ), and the Host b( 2 b ) under the state accessible to the LU 1 a ( 121 a ). That is, as shown in FIG. 5A ,
  • the LU management table 1111 in the memory 101 of the SNa( 1 a ) includes Targ-a 0 as the target name of the LU 0 a ( 120 a ), and Init-a 0 as the initiator in the Host a( 2 a ) that is allowed to access the LU 0 a ( 120 a ).
  • the LU management table 1111 includes Targ-a 1 as the target name of the LU 1 a ( 121 a ), and Init-b 0 as the initiator in the Host b( 2 b ) allowed to access the LU 1 a ( 121 a ).
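As a rough illustration of the LU management table 1111 just described, here is a hedged sketch; the RG names, the `StartRGLBA`/`LEN` values, and the helper `lookup_lu` are hypothetical, and only the Target/Initiator/LU columns come directly from the text:

```python
# Hypothetical sketch of the LU management table 1111 of the SNa( 1 a ).
# Column names follow the text (Target, Initiator, LU, RG, Start RG LBA, LEN);
# the RG names and numeric values are illustrative only.

lu_table_sna = [
    {"Target": "Targ-a0", "Initiator": "Init-a0", "LU": "LU0a",
     "RG": "RG0a", "StartRGLBA": 0, "LEN": "n"},
    {"Target": "Targ-a1", "Initiator": "Init-b0", "LU": "LU1a",
     "RG": "RG1a", "StartRGLBA": 0, "LEN": "n"},
]

def lookup_lu(table, target_name):
    """Find the LU entry serving a given target name, or None."""
    for entry in table:
        if entry["Target"] == target_name:
            return entry
    return None
```

A storage node answering an iSCSI command would resolve the addressed target name to an LU entry with a lookup of this kind.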
  • FIG. 9 is a flowchart of an exemplary process of, through addition of a new SN 1 to the storage system 1000 , migrating data from an LU 12 Xx of any existing SN 1 to an LU 12 Xx of the newly-added SN 1 .
  • the SNb( 1 b ) is connected to the switch 3 to add the SNb( 1 b ) to the storage system 1000 (step 9001 of FIG. 9 ).
  • the SNb( 1 b ) is assumed to have a storage region large enough to store the data in the LU 1 a ( 121 a ) of the SNa( 1 a ).
  • the CPU of the management console 4 goes through the structure management program 4122 to acquire information about the LU 1 a ( 121 a ), which is the destination LU (step 9002 ).
  • more specifically, the program goes through the following process.
  • the structure management program 4122 asks the SNa( 1 a ) for structure information of the LU 1 a ( 121 a ).
  • the LU control program 1128 of the SNa( 1 a ) refers to the LU management table 1111 to forward the applicable structure information of the LU 1 a ( 121 a ) to the management console 4 .
  • the structure information includes information in the LU management table 1111 of the SNa( 1 a ), and information about the RG structure (RAID structure) including the LU 1 a ( 121 a ) structured therein.
  • the structure management program 4122 enters, into the LU management table 1111 ′ stored in its own memory, the information received from the SNa( 1 a ) together with the identification information of the SNa( 1 a ). Then, based on the received information, the LU 1 a ( 121 a ) is identified as an LU having a capacity of n blocks in a RAID group of RAID5 structure.
  • the structure management program 4122 may skip step 9002 if the management console 4 already has information about the SNs 1 in the storage system 1000 , i.e., information in the LU management table 1111 , and the RAID structure of the respective LUs, and if the management console 4 is exercising control over the structure information using its own LU management table 1111 ′.
  • the structure management program 4122 of the management console 4 instructs the SNb( 1 b ) to construct an LU 0 b ( 120 b ), having the same capacity as the migration source LU 1 a ( 121 a ), in any appropriate RAID group of the newly-added SNb( 1 b ).
  • the RAID group considered appropriate may be the one having the same RAID structure as the LU 1 a ( 121 a ).
  • the structure management program 4122 also instructs the SNb( 1 b ) to set the newly constructed LU 0 b ( 120 b ) as a target at the portal Tb 0 , identified by the physical port and the TCP port number designated by the SNb( 1 b ), and at the portal group TPGb 0 .
  • the LU control program 1128 constructs the LU 0 b ( 120 b ), and a target having the target name Targ-b 0 is created at the portal Tb 0 and the portal group TPGb 0 . Then, as shown in FIG. 5B , an entry is added to the LU management table 1111 b with Targ-b 0 for target name, LU 0 b for LU, RG 0 b for RG, 0 for Start RG LBA, and n for LEN.
  • the communications program 1131 of the SNb( 1 b ) forwards a request to the name server 5 to enter any new target thereto.
  • the name server 5 registers the target management table 1112 b of FIG. 7A to the name management table 5111 as information about the new target.
  • the target management table 1112 b stores Targ-b 0 for target name, SNb for Entity, Tb 0 for Portal, and TPGb 0 for PortalGroup; the Initiator column is left vacant and will be filled in step 9005 , described later.
  • the target control program 1123 of the SNb( 1 b ) enters, also to the target management table 1112 in its own memory 101 , the same contents as stored in the target management table 1112 b in the name management table 5111 of the name server 5 , i.e., Targ-b 0 for target name, SNb for Entity, Tb 0 for Portal, and TPGb 0 for PortalGroup (step 9003 of FIG. 9 ).
  • the LU 0 b ( 120 b ) is constructed, and the target Targ-b 0 is registered.
  • the construction information about the LU 0 b ( 120 b ) and the contents of the target management table 1112 of the target Targ-b 0 are forwarded from the SNb( 1 b ) to the structure management program 4122 of the management console 4 .
  • the information is also registered into the LU management table 1111 ′ and the target management table 1112 of the management console 4 .
  • the structure information about the LU 0 b ( 120 b ) includes the RAID structure of the RAID group of the LU 0 b ( 120 b ), and the information of the LU 0 b ( 120 b ) entered to the LU management table of the SNb( 1 b ).
  • the structure management program 4122 of the management console 4 instructs the migration source SNa( 1 a ) to construct an initiator at the portal ISNa 1 , having the designated physical port and port number, and at the portal group IPGSNa 1 .
  • when the SNa( 1 a ) receives such an instruction, the initiator control program 1130 responsively creates an initiator having the initiator name of init-SNa 1 at the portal ISNa 1 and the portal group IPGSNa 1 . Then, the communications program 1131 asks the name server 5 to enter the resulting initiator thereto.
  • upon reception of such a request, the name server 5 registers to the name management table 5111 an initiator management table 1113 SNa 1 of FIG. 7A as information about the newly-constructed initiator.
  • the initiator management table 1113 SNa 1 already has init-SNa 1 for initiator name, SNa for Entity, ISNa 1 for Portal, and IPGSNa 1 for PortalGroup.
  • the initiator control program 1130 of the SNa( 1 a ) enters, also to the initiator management table 1113 in its own memory 101 , the same contents as stored in the initiator management table 1113 SNa 1 in the name management table 5111 of the name server 5 , i.e., init-SNa 1 for initiator name, SNa for Entity, ISNa 1 for Portal, and IPGSNa 1 for PortalGroup.
  • this completes the initiator construction in the SNa( 1 a ), and the contents of the initiator management table 1113 of the initiator init-SNa 1 are forwarded from the SNa( 1 a ) to the structure management program 4122 of the management console 4 so as to be entered to the initiator management table 1113 of the management console 4 .
  • the structure management program 4122 of the management console 4 issues an instruction towards the SNb( 1 b ) to provide the initiator init-SNa 1 of the SNa( 1 a ) with an access permission for the target Targ-b 0 .
  • after the SNb( 1 b ) receives such an instruction, as shown in FIG. 5B , the LU control program 1128 enters the initiator Init-SNa 1 to the LU management table 1111 b as an initiator with access permission to the target Targ-b 0 , i.e., the LU 0 b . Further, the target control program 1123 of the SNb( 1 b ) enters the initiator Init-SNa 1 to the target management table 1112 of the target Targ-b 0 as an initiator with access permission to the target Targ-b 0 .
  • the SNb( 1 b ) asks the name server 5 to enter an initiator of Init-SNa 1 to the target management table 1112 b as an initiator allowed to access the target Targ-b 0 .
  • the target management table 1112 b is the one registered into the name management table 5111 in step 9003 .
  • the relation between the initiator Init-SNa 1 and the target Targ-b 0 (LU 0 b ) is established.
  • the initiator of the migration source SN is successfully entered to the target of the migration destination SN.
  • the structure management program 4122 of the management console 4 enters, also into the target management table 1112 stored in its own memory, Init-SNa 1 as an initiator allowed to access the target Targ-b 0 .
  • with this, the initiator-target relation under the management of the name server 5 has changed.
  • the name management program 5122 of the name server 5 issues a State Change Notification (SCN) to the corresponding initiators, i.e., devices such as the hosts 2 and SNs 1 each including an initiator.
  • the initiators that have received such an SCN go through a process referred to as discovery.
  • in discovery, the initiators each make an inquiry to the name server 5 whether any change has occurred to the targets accessible thereby, i.e., whether the accessible target(s) have been added or deleted.
  • upon reception of such an inquiry, the name server 5 responsively makes a search of the name management table 5111 based on the initiator name included in the inquiry. After the search, a response is made with the target management information about any target(s) accessible by the inquiring initiator, i.e., the information registered in the target management table.
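The register-then-notify-then-discover exchange of steps 9005 and 9006 can be sketched as follows; the dictionary-based name server and the function names are assumptions for illustration, not the patent's implementation, and real iSNS messages are not modeled:

```python
# Hedged sketch of the SCN/discovery exchange. The name server's tables
# are plain dicts; portals, sessions, and wire formats are not modeled.

name_table = {
    "Targ-b0": {"Entity": "SNb", "Portal": "Tb0",
                "PortalGroup": "TPGb0", "Initiator": None},
}
pending_scns = {}   # initiator name -> list of delivered SCNs

def grant_access(target, initiator):
    """Step 9005: enter `initiator` as allowed to access `target`,
    then issue a State Change Notification (SCN) to that initiator."""
    name_table[target]["Initiator"] = initiator
    pending_scns.setdefault(initiator, []).append({"event": "target-change"})

def discovery(initiator):
    """Step 9006: the initiator inquires which targets it may access;
    the name server searches the name management table by initiator name."""
    return sorted(t for t, rec in name_table.items()
                  if rec["Initiator"] == initiator)
```

After `grant_access("Targ-b0", "Init-SNa1")`, a discovery by Init-SNa1 reveals the new target Targ-b0, while hosts whose accessible targets are unchanged see nothing new.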
  • in step 9006 , as for the initiators located in the hosts 2 , no change is observed for the targets accessible by the corresponding initiator. Thus, even if the host 2 goes through discovery, no target change is discovered, and nothing happens.
  • in the SNa( 1 a ), on the other hand, the initiator control program 1130 asks the iSCSI control program 1122 to go through discovery.
  • the iSCSI control program 1122 is notified, by the name server 5 , of a new target Targ-b 0 corresponding to the initiator Init-SNa 1 of the SNa( 1 a ).
  • the initiator control program 1130 of the SNa( 1 a ) instructs the TCP/IP program 1121 to establish any new TCP connection between the TCP port of the SNa( 1 a ) and the TCP port of the SNb( 1 b ).
  • the initiator control program 1130 instructs the iSCSI control program 1122 to go through an iSCSI log-in process to establish a new iSCSI session between the portal ISNa 1 and the portal Tb 0 of the SNb( 1 b ). In this manner, a communications path using iSCSI is established between the SNa( 1 a ) and the SNb( 1 b ).
  • the initiator control program 1130 of the SNa( 1 a ) issues an iSCSI Inquiry command to the target Targ-b 0 of the SNb( 1 b ) to detect an LU 0 b .
  • This allows the SNa( 1 a ) to access the LU 0 b ( 120 b ) of the SNb( 1 b ).
  • the structure management program 4122 of the management console 4 issues an instruction toward the SNa( 1 a ) to migrate data in the LU 1 a ( 121 a ) to the LU 0 b ( 120 b ) of the SNb( 1 b ).
  • upon reception of such an instruction, the SNa( 1 a ) activates the migration program 1129 .
  • the migration program 1129 uses the TCP session established in step 9006 to check the state of the LU 0 b ( 120 b ), e.g., whether the LU 1 a ( 121 a ) and the LU 0 b ( 120 b ) are of the same size. Then, the SNb( 1 b ) is notified that migration is now started.
  • the migration program 1129 of the SNa( 1 a ) issues a command to the target control program 1123 .
  • the target control program 1123 reads data of the LU 1 a ( 121 a ) into the cache 110 in units of any appropriate size.
  • the migration program 1129 issues another command to the initiator control program 1130 .
  • the initiator control program 1130 issues an iSCSI writing command to the LU 0 b ( 120 b ) of the SNb( 1 b ) to write the data read to the cache 110 .
  • after receiving the writing command and the data, the SNb( 1 b ) stores the data into the cache 110 , and then writes the data thus stored in the cache 110 to the LU 0 b ( 120 b ). By repeating such a procedure, the data in the LU 1 a ( 121 a ) is completely copied into the LU 0 b ( 120 b ) (( 1 ) of FIG. 8 ).
  • during the copying, the initiator init-b 0 of the Host b( 2 b ) keeps accessing the LU 1 a ( 121 a ) of the SNa( 1 a ), i.e., the target Targ-a 1 .
  • when writing data arrives, the migration program 1129 of the SNa( 1 a ) writes the writing data to the LU 1 a ( 121 a ), and also forwards the writing data to the LU 0 b ( 120 b ) of the SNb( 1 b ). Then, the SNa( 1 a ) reports to the Host b( 2 b ) that the writing process is through. Alternatively, the writing data may be reflected to the LU 0 b ( 120 b ) periodically, as follows.
  • storage regions storing different data between the migration source LU 1 a ( 121 a ) and the migration destination LU 0 b ( 120 b ) may be managed by the SNa( 1 a ) using a differential bit map.
  • the SNa( 1 a ) makes a registration of a differential bit for any storage region on the differential bit map.
  • the storage region is the one not yet through with data copying from the LU 1 a ( 121 a ) to the LU 0 b ( 120 b ), and the one through with copying but thereafter showing no data coincidence between the LU 1 a ( 121 a ) and the LU 0 b ( 120 b ) due to data update in the LU 1 a ( 121 a ).
  • This update is caused by reception of writing data addressed to the LU 1 a ( 121 a ) from the Host b( 2 b ).
  • the SNa( 1 a ) may write the data stored in the LU 1 a ( 121 a ) to the LU 0 b ( 120 b ) after the data copying process is through, only for the storage regions registered with a differential bit. In this manner, the writing data received from the Host b( 2 b ) during the copying process can be copied to the LU 0 b ( 120 b ) being the migration destination.
  • as a result, the data in the LU 1 a ( 121 a ) and the data in the LU 0 b ( 120 b ) become identical (( 1 ) of FIG. 8 ). This is the end of data copying.
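The copy-plus-differential-bit-map scheme described above can be sketched as follows, with LUs modeled as Python lists of blocks; the function names and block granularity are assumptions for illustration, and the iSCSI transport between the SNs is not modeled:

```python
# Hedged sketch of data copying with a differential bit map ((1) of FIG. 8).

def migrate(src, dst, dirty):
    """Initial full copy of the source LU into the destination LU."""
    for blk in range(len(src)):
        dst[blk] = src[blk]
        dirty[blk] = False          # this block is now in sync

def host_write(src, dirty, blk, data):
    """A write from the host during migration updates the source LU
    and registers a differential bit for the written block."""
    src[blk] = data
    dirty[blk] = True

def differential_pass(src, dst, dirty):
    """After the initial copy, re-copy only the blocks whose
    differential bit is set, then clear those bits."""
    for blk, is_dirty in enumerate(dirty):
        if is_dirty:
            dst[blk] = src[blk]
            dirty[blk] = False
```

One initial `migrate` followed by a `differential_pass` for blocks dirtied by concurrent host writes leaves the two LUs holding identical data.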
  • the migration program 1129 of the SNa( 1 a ) instructs the LU control program 1128 to refer to the LU management table 1111 so that the target of the LU 1 a ( 121 a ), i.e., Targ-a 1 , and the initiator thereof, i.e., Init-b 0 , are acquired from the LU management table 1111 a of FIG. 5A .
  • the migration program 1129 of the SNa( 1 a ) uses any new or existing TCP connection between the SNa( 1 a ) and the SNb( 1 b ), e.g., the TCP connection established in step 9006 , to transfer information about thus acquired initiators and targets of the LU 1 a ( 121 a ).
  • the migration program 1129 of the SNb( 1 b ) issues an instruction to the LU control program 1128 .
  • the LU control program 1128 responsively enters, into the LU management table 1111 of the LU 0 b ( 120 b ) of FIG. 5C , Targ-a 1 for Target, and Init-b 0 for Initiator. In more detail, the LU control program 1128 enters the target and initiator of the LU 1 a received from the SNa( 1 a ) into the LU management table of the LU 0 b ( 120 b ), thereby changing the target and initiator of the LU 0 b ( 120 b ) to those of the LU 1 a ( 121 a ).
  • the data and the access information, i.e., target and initiator, of the LU 1 a ( 121 a ) of the SNa( 1 a ) are taken over by the LU 0 b ( 120 b ) of the SNb( 1 b ), and this is the end of LU migration.
  • a completion notice is forwarded by the SNb( 1 b ) to the SNa( 1 a ), and by the SNa( 1 a ) to the structure management program 4122 of the management console 4 .
  • upon reception of the completion notice, the management console 4 enters, also into its own LU management table 1111 ′, Targ-a 1 to the Target of the LU 0 b ( 120 b ), and Init-b 0 to the Initiator thereof.
  • the structure management program 4122 of the management console 4 instructs the SNa( 1 a ) to go through initiator deletion.
  • the SNa( 1 a ) responsively instructs the initiator control program 1130 to cut off the connection between the initiator Init-SNa 1 and the target Targ-b 0 used for data migration, and delete the initiator Init-SNa 1 .
  • the initiator control program 1130 instructs the iSCSI control program 1122 to cut off the session between the initiator Init-SNa 1 and the target Targ-b 0 .
  • the initiator control program 1130 deletes the initiator management table 1113 about the initiator Init-SNa 1 from the memory 101 , and instructs the name server 5 to delete the initiator management table 1113 SNa 1 about the initiator Init-SNa 1 .
  • the initiator Init-SNa 1 is deleted by following, in reverse, steps 9004 and 9005 of initiator registration.
  • the structure management program 4122 of the management console 4 also deletes the initiator management table 1113 of the initiator Init-SNa 1 stored in its own memory.
  • the structure management program 4122 of the management console 4 instructs the SNa( 1 a ) to cut off the session established between the target Targ-a 1 set to the LU 1 a ( 121 a ) being the migration source and the initiator Init-b 0 located in the Host b( 2 b ), and to delete the target Targ-a 1 set to the migration source LU 1 a ( 121 a ).
  • the LU control program 1128 of the SNa( 1 a ) so instructed then issues an instruction to the iSCSI control program 1122 to cut off the session between the initiator Init-b 0 of the Host b( 2 b ) and the target Targ-a 1 of the SNa( 1 a ), and the iSCSI control program 1122 responsively executes the instruction.
  • the LU control program 1128 deletes, from the LU management table 1111 a of FIG. 5A , any entry relating to the LU 1 a ( 121 a ).
  • the LU management table in the memory 101 of the SNa( 1 a ) then looks like the LU management table 1111 a of FIG. 5D .
  • the SNa( 1 a ) deletes the entry of Targ-a 1 from the target management table 1112 in the memory 101 .
  • the communications program 1131 of the SNa( 1 a ) instructs the name server 5 to delete, also from the name management table 5111 , any entry relating to the target Targ-a 1 in the target management table 1112 .
  • the name server 5 then responsively goes through deletion as instructed (( 2 ) of FIG. 8 ).
  • the structure management program 4122 of the management console 4 deletes any entry relating to the LU 1 a ( 121 a ) from the LU management table 1111 ′ in its own memory, and also deletes the target management table relating to the target Targ-a 1 .
  • the structure management program 4122 of the management console 4 then instructs the SNb( 1 b ) to enter, to the name server 5 , the target Targ-a 1 having been set to the migration destination LU 0 b ( 120 b ) in step 9008 .
  • the communications program 1131 of the SNb( 1 b ) instructed as such notifies, in a similar manner to step 9003 , the name server 5 to change the target name and the initiator name in the target management table 1112 b of the name management table 5111 into target: Targ-a 1 , and initiator: Init-b 0 (( 3 ) of FIG. 8 ).
  • the name management program 5122 of the name server 5 changes the name management table 5111 as notified.
  • the resulting name management table 5111 looks like the one shown in FIG. 7B .
  • the target control program 1123 of the SNb( 1 b ) also applies the same change as done by the name server 5 . That is, the target management table 1112 stored in the memory 101 of the SNb( 1 b ) is changed similarly. Specifically, in the target management table 1112 , the target is changed from Targ-b 0 to Targ-a 1 , and the initiator is changed from Init-SNa 1 to Init-b 0 so as to include Target: Targ-a 1 , Initiator: Init-b 0 , Entity: SNb, Portal: Tb 0 , and PortalGroup: TPGb 0 .
  • the structure management program 4122 of the management console 4 stores, into its own memory, a new target management table 1112 of the target Targ-a 1 , which includes Target: Targ-a 1 , Initiator: Init-b 0 , Entity: SNb, Portal: Tb 0 , and PortalGroup: TPGb 0 .
  • the name management program 5122 of the name server 5 issues a State Change Notification (SCN) to the initiators (( 4 ) of FIG. 8 ).
  • the initiators each execute discovery to inquire of the name server 5 whether any change has occurred to their own accessible targets.
  • after the Host b( 2 b ) receives the SCN and issues an inquiry to the name server 5 through execution of discovery (( 5 ) of FIG. 8 ), the Host b( 2 b ) is notified by the name server 5 of management information about the target Targ-a 1 relating to the initiator Init-b 0 .
  • the management information is the one registered in the target management table 1112 b of the target Targ-a 1 . Accordingly, this tells the Host b( 2 b ) that the target Targ-a 1 relating to the initiator Init-b 0 has moved to the SNb( 1 b ).
  • a TCP/IP program (not shown) of the Host b( 2 b ) establishes a new TCP connection between the TCP port of the Host b( 2 b ) and the TCP port of the SNb( 1 b ).
  • the iSCSI control program (not shown) of the Host b( 2 b ) goes through an iSCSI log-in process to the SNb( 1 b ) to establish a new iSCSI session between the portal Ib 0 of the Host b( 2 b ) and the portal Tb 0 of the SNb( 1 b ).
  • a communications path using iSCSI is established between the Host b( 2 b ) and the SNb( 1 b ), and thus path switching is completed (( 6 ) of FIG. 8 ).
  • the initiator Init-b 0 of the Host b( 2 b ) forwards a writing command and writing data to the target Targ-a 1
  • the SNb( 1 b ) including the target Targ-a 1 receives the command and data.
  • the writing data is thus stored in the LU 0 b ( 120 b ) including the target Targ-a 1 .
  • the LU 0 b ( 120 b ) takes over not only the data but also access information.
  • the access information includes target names of targets set to the LU 1 a ( 121 a ) being the migration source, and initiator names of initiators allowed to access the targets.
  • the Host b( 2 b ) having gone through discovery acknowledges that the target Targ-a 1 corresponding to its initiator init-b 0 has changed in location from the SNa( 1 a ) to the SNb( 1 b ). The Host b( 2 b ) does not, however, acknowledge that the target itself has been changed. This is because the target name Targ-a 1 corresponding to the initiator Init-b 0 shows no change even after data migration.
  • the Host 2 can access the same data as long as accessing the target having the same target name.
  • if the session between the initiator Init-b 0 of the Host b( 2 b ) and the target Targ-a 1 of the SNa( 1 a ) is cut off in step 9010 , access from the Host b( 2 b ) is temporarily interrupted until a session is established in step 9012 between the initiator Init-b 0 of the Host b( 2 b ) and the target Targ-a 1 of the SNb( 1 b ).
  • the iSCSI command process generally has a retry mechanism, and thus if no command is received by the target, the Host b( 2 b ) continuously retries for a duration of 10 seconds.
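The retry behavior can be illustrated with a small sketch; `send_with_retry` and its parameters are hypothetical, and real initiators follow their own driver-level retry policy rather than this loop:

```python
import time

def send_with_retry(send, timeout=10.0, interval=0.5):
    """Retry a command until the target answers or roughly `timeout`
    seconds elapse, mirroring the retry window described above.
    `send` is any callable that raises ConnectionError while the
    target is unreachable (an assumption of this sketch)."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            return send()
        except ConnectionError:
            if time.monotonic() >= deadline:
                raise                       # give up after the window
            time.sleep(interval)            # wait before retrying
```

During the path switch, commands issued while neither session exists simply fail and are retried, so the host-visible effect is a short stall rather than an error.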
  • the programs controlling the layers below the operating system of the Host b( 2 b ), such as the TCP/IP program and the iSCSI control program, acknowledge that the location of the target Targ-a 1 has changed due to data migration as above.
  • this is because it is the TCP/IP program and the iSCSI control program that establish the TCP connection and the iSCSI session.
  • the operating system of the Host b( 2 b ) does not necessarily have to acknowledge the location of the target as long as the LU is acknowledged as a logical volume.
  • the operating system of the Host b( 2 b ) and the application program operating thereon are thus unaware that data migration has been executed. That is, data migration can be favorably performed without causing the operating system of the Host 2 or the application program to notice data migration among the SNs 1 .
  • to achieve this, the target name has to be a unique identifier.
  • an exemplary method of generating such a name is described below.
  • a target name is a character string of an appropriate length.
  • An exemplary character string is a combination of various codes and numbers, e.g., a code identifying a manufacturing company, a code identifying a specific organization in the manufacturing company, a code for identifying a storage system, a code for identifying the type of a storage node, a code of a revision of the storage node, a serial number of the storage node, and a sequential number assigned to a target in the storage node.
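A hedged sketch of such a name generator, assuming dot-separated fields and a zero-padded sequential number; the patent prescribes only the component codes, not any concrete field order, separator, or width:

```python
def make_target_name(company, org, system, node_type, revision, serial, seq):
    """Concatenate the code fields listed above into a target name.
    Separators, field order, and the 4-digit zero padding of the
    per-node sequential number are illustrative assumptions."""
    return ".".join([company, org, system, node_type, revision,
                     str(serial), "t%04d" % seq])
```

Because the serial number identifies the storage node and the sequential number is unique within that node, two distinct targets never receive the same name.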
  • the LU 12 Xx being the migration destination takes over the target name of the LU 12 Xx being the migration source.
  • the target name can be continuously used by the SN 1 being the migration destination after being taken over.
  • the CTL 10 of the storage node 1 may be provided with nonvolatile memory, such as flash memory, for storing the maximum value of the sequential number used when providing a target name to a target in the SN 1 .
  • the maximum value of the sequential number is the maximum sequential number already in use.
  • alternatively, the LU 12 Xx being the migration destination may be provided with any new target name. If this is the case, for the LU 12 Xx being the migration destination, a target name unique to the destination SN 1 can be set using a sequential number of the SN 1 , the serial number of the SN 1 , a revision code of the SN 1 , and others. If any new target name is set to the destination LU 12 Xx, the LU control program 1128 of the SNb( 1 b ) enters the newly-set target name in step 9008 of FIG. 9 .
  • also, in step 9011 , the SNb( 1 b ) is required to enter the newly-set target name to the name server 5 .
  • the initiator Init-b 0 of the Host b( 2 b ) detects the new target, enabling the initiator to construct a session with the target.
  • in the above, the SN 1 generates the target or the initiator for registration into the name server 5 .
  • alternatively, the name server 5 may generate them. If this is the case, the SNs 1 issue an instruction for the name server 5 to enter the target and initiator, and in return, the name server 5 forwards the target and initiator back to the corresponding SN 1 . Then, the SN 1 makes an entry of the target and initiator received from the name server 5 .
  • FIG. 15 shows an exemplary display screen of the management console 4 .
  • the structure management program 4122 of the management console 4 displays on its screen the LU management table 1111 ′, the target management table 1112 , and the initiator management table 2112 or 1113 , all of which are stored in the memory of the management console 4 .
  • FIGS. 15C and 15D both show such a display screen. Specifically, FIG. 15C shows an exemplary display screen before data migration, and FIG. 15D shows an exemplary display screen after data migration.
  • the structure management program 4122 displays on its screen the LU management table 1111 ′, the target management table 1112 , the initiator management table 2112 or 1113 , and pointers therefor.
  • a manager using the management console 4 can easily grasp the relationship between the LU and the initiator or the target from the information displayed on the display screen.
  • the structure management program 4122 also displays the system structure on its screen based on the LU management table 1111 ′, the target management table 1112 , and the initiator management table 2112 or 1113 stored in the memory of the management console 4 .
  • FIGS. 15A and 15B both show such a display screen. Specifically, FIG. 15A shows the system structure before data migration, and FIG. 15B shows the system structure after data migration.
  • FIGS. 15A and 15B both show a display screen in a case where the LU 0 b ( 120 b ) being the migration destination takes over the target name set to the LU 1 a ( 121 a ) being the migration source.
  • the target name is taken over from the migration source LU to the migration destination LU, so that the target Targ-a 1 changes its location on the display screen before and after data migration.
  • the combination of initiator and target remains the same, i.e., pair of init-a 0 and Targ-a 0 , and pair of init-b 0 and Targ-a 1 .
  • the information displayed on the display screen is updated every time the LU management table 1111 ′, the target management table 1112 , or the initiator management table 2112 or 1113 is updated. Such update is performed responding to an instruction coming from the structure management program to the SNs 1 as described by referring to FIG. 9 , or a notification received by the structure management program from the SNs 1 about any change applied to the system structure.
  • in the first embodiment, exemplified was the case of migrating data stored in the LU 1 a ( 121 a ) of the SNa( 1 a ) to the newly-added SNb( 1 b ).
  • in a second embodiment, an SNc( 1 c ) is additionally connected to the switch 3 , and the LU 0 a ( 120 a ) left in the SNa( 1 a ) is migrated to the newly-added SNc( 1 c ).
  • the LU 0 a ( 120 a ) with the target Targ-a 0 in the SNa( 1 a ) is connected with the initiator Init-a 0 of the Host a( 2 a ).
  • the initiator-target relationship is different from that in the first embodiment, and the discovery and other processes are to be executed by the Host a( 2 a ).
  • the procedure remains the same: the data in the LU 0 a ( 120 a ) of the SNa( 1 a ) is migrated to the LU 0 c ( 120 c ) of the SNc( 1 c ), the migration destination LU 0 c ( 120 c ) takes over the target Targ-a 0 of the LU 0 a ( 120 a ), and the access path between the initiator Init-a 0 and the target Targ-a 0 is changed accordingly.
  • after completion of such data migration, the SNa(1 a) has no LU 12Xx to be accessed by the Hosts 2. Accordingly, the SNa(1 a) can be removed from the switch 3, reducing the number of SNs.
  • the SNa(1 a) can be replaced by the SNc(1 c) without interrupting access from the Hosts 2. More specifically, while the access path from the Hosts 2 is being changed by migrating the data stored in the LU0 a(120 a) of the SNa(1 a) to the newly-added SNc(1 c), the Hosts 2 can still access the data stored in both LUs.
  • data can be stored for as long as it is needed, while suppressing the cost increase required for system replacement, without temporarily saving data elsewhere, and without interrupting data access.
  • FIG. 11 is a diagram showing another exemplary system structure.
  • a third embodiment differs from the first and second embodiments in that the storage node 1 has two controllers, CTL0 and CTL1, and the LUs 120 x are structured so as to be accessible by both of these controllers 10.
  • the network 30 is provided with two switches, switch 0(3) and switch 1(31), and the Hosts 2 and the storage nodes 1 are each connected to both switches.
  • the wiring between the LU 120 x and the CTL 10, the wiring between the SN 1 and the switch, and the wiring between the Hosts 2 and the switch are all duplicated. In such a manner, the resulting storage system achieves high reliability.
  • the method for replacing the storage node 1 and for distributing load through LU migration is the same as that in the first and second embodiments.
  • FIG. 12 is a diagram showing another exemplary system structure.
  • the storage system 1000 is provided with a plurality of CTL 10 , and these CTL 10 share the LU 12 Xx via a disk connector 150 .
  • Add-in and removal of the SNs in the first and second embodiments correspond to add-in and removal of the CTL 10 .
  • a CTLc(10 c) may be added as a replacement for the out-of-life CTLa(10 a); after the newly-added CTLc(10 c) takes over the LU 12Xx that was under the control of the CTLa(10 a), the CTLa(10 a) is removed.
  • the procedure for taking over the LU management information in the LU management table 1111 of the CTLa(10 a), for taking over the target in the target management table 1112 of the CTLa(10 a), and for changing the access path is executed in the same manner as in the first and second embodiments.
  • the CTLs 10 are each connected to the corresponding LU 12 Xx via the disk connector 150 , thus there is no need for data migration from the LU 12 Xx.
  • the CTLc( 10 c ) is allowed to access the LU 0 ( 120 a ) through the disk connector 150 .
  • exclusive control is to be exercised, and thus the same procedure as in the first and second embodiments is executed for the CTLc(10 c) to take over the LU management information about the LU0(120 a) from the CTLa(10 a), to take over the target information set to the LU(120 a), i.e., the contents of the target management table 1112 about the target, and so forth.
  • the procedure can skip the data copying process. In this manner, cost efficiency is improved and the system change can be completed more swiftly.
  • FIG. 13 is a diagram showing still another exemplary system structure.
  • the switch 3 and the management console 4 are included in the storage system 1000 .
  • the switch 3, the management console 4, and the SN 1 are all components of the storage system 1000, and are provided to the user as a set.
  • these components are structured as a single unit, providing the user with better manageability.
  • FIG. 14 is a diagram showing still another exemplary system structure.
  • the management console 4 of FIG. 13 is not provided, and the structure management program 4122 in the management console 4 of the above embodiments is provided to the CTL( 10 ) of the respective storage nodes.
  • the structure management program 4122 communicates with the other structure management programs 4122 to see what structure change has occurred. Further, prior to a structure change, exclusive control is applied to any needed resources. Such a structure eliminates the management console 4, leading to a storage system with better cost efficiency.
  • the access path from the host is changed after LU data migration is performed. This change may be done in the following order:
  • Migrate LU information, with target information and initiator access permission information included
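The take-over of data, target name, and initiator access permission described throughout these embodiments can be sketched in a few lines of code. This is an illustrative model only: the class names (`NameServer`, `StorageNode`), the `migrate_lu` function, and the table layouts are assumptions, not structures from the patent.

```python
# Illustrative sketch of LU take-over during data migration.
# All names here are hypothetical, not from the patent text.

class NameServer:
    """Minimal stand-in for the iSNS-style name server."""
    def __init__(self):
        self.targets = {}  # target name -> (owning node, allowed initiator)

    def register(self, target, node, initiator):
        self.targets[target] = (node, initiator)

class StorageNode:
    def __init__(self, name):
        self.name = name
        self.lus = {}      # LU number -> (data, target name, allowed initiator)

def migrate_lu(src, dst, lu, name_server):
    data, target, initiator = src.lus[lu]
    # 1. Copy the LU data to the migration destination node.
    dst.lus[lu] = (data, target, initiator)
    # 2. Destination takes over the target name and access permission,
    #    so the host keeps using the same initiator-target pair.
    name_server.register(target, dst.name, initiator)
    # 3. Source releases the LU; the host rediscovers the target
    #    at the destination via the name server.
    del src.lus[lu]

ns = NameServer()
sna, snb = StorageNode("SNa"), StorageNode("SNb")
sna.lus["LU1a"] = (b"payload", "Targ-a1", "Init-b0")
ns.register("Targ-a1", "SNa", "Init-b0")
migrate_lu(sna, snb, "LU1a", ns)
# The target Targ-a1 now resolves to SNb with the same allowed initiator.
assert ns.targets["Targ-a1"] == ("SNb", "Init-b0")
```

Because only the name-server registration changes, the host's initiator-target pairing survives the migration unchanged, which is the point of the take-over order listed above.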

Abstract

A storage system is connected to a computer and to a name server exercising control over the interrelation between initiators and targets, and includes a first storage node and a second storage node. The first storage node has a first logical unit to which a first target is set, and the second storage node has a second logical unit. For data migration from the first logical unit to the second logical unit, the first storage node forwards data stored in the first logical unit to the second storage node, and the second storage node stores the data into the second logical unit. The first storage node also forwards information about the first target to the second storage node, and the second storage node sets a target to the second logical unit using the received information about the first target.

Description

CROSS-REFERENCES TO RELATED APPLICATIONS
This application relates to and claims priority from Japanese Patent Application No. JP2004-139306, filed on May 10, 2004, the entire disclosure of which is incorporated herein by reference.
BACKGROUND
The present invention relates to a storage system for use in a computer system.
The data migration technology from a first storage system to a second storage system is described in JP-A-2000-187608.
In JP-A-2000-187608, once connected with a host computer, the second storage system issues a read request to the first storage system so that data in the first storage system is copied into the second storage system. The second storage system is provided with a copy pointer for recording the completion level of data copying to indicate the progress of data migration.
During such data migration, an I/O request issued by the host computer is accepted by the second storage system. In an exemplary case where a read request is issued from the host computer during data migration, the second storage system refers to the copy pointer to see whether the requested data has already been copied to the second storage system. If so, the second storage system forwards the data to the host computer. If not, the second storage system reads the requested data from the first storage system for transfer to the host computer.
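The copy-pointer read path described above can be sketched minimally as follows, assuming block-granular copying in ascending LBA order and dictionary-backed block stores. Function and variable names are illustrative, not from JP-A-2000-187608.

```python
# Hedged sketch of the copy-pointer read path: blocks below the pointer
# have already been copied to the second storage system; reads at or above
# it fall back to the first storage system.

def read_during_migration(lba, copy_pointer, second_system, first_system):
    """Serve a host read while migration from first to second is in flight."""
    if lba < copy_pointer:
        # Already migrated: serve directly from the second (new) system.
        return second_system[lba]
    # Not yet migrated: fetch from the first (old) system and forward it.
    return first_system[lba]

first = {0: "d0", 1: "d1", 2: "d2", 3: "d3"}   # old system holds everything
second = {0: "d0", 1: "d1"}                    # blocks 0..1 copied so far
copy_pointer = 2
assert read_during_migration(0, copy_pointer, second, first) == "d0"
assert read_during_migration(3, copy_pointer, second, first) == "d3"
```

The single comparison against the copy pointer is what lets the second system accept all host I/O before the copy has finished.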
SUMMARY
In JP-A-2000-187608, first, the connection between the first storage system and the host computer is terminated to establish another connection between the host computer and the second storage system. Then, data migration is performed from the first storage system to the second storage system. Once connected to the second storage system, the host computer issues an I/O request to the second storage system.
The concern here is that there is no disclosure in JP-A-2000-187608 about how an access path is changed between the host computer and the corresponding storage system, especially about how to make settings to the second storage system for an access destination of the host computer.
At the time of data migration, if information about data access can be transferred from a migration source to a migration destination, the host computer can be allowed to make access to the migration destination under the same conditions as for the migration source. Accordingly, it is desired that such transfer be realized.
In view of the above, a connection is established over a network among a storage system, a computer, and a name server for managing interrelation between initiators and targets. The storage system includes first and second storage nodes. The first storage node is provided with a first logical unit to which a first target is set. The first target is the one interrelated to a first initiator set to the computer. The second storage node is provided with a second logical unit.
For data migration from the first logical unit to the second logical unit, the first storage node forwards data stored in the first logical unit to the second storage node, and thus received data is then stored in the second logical unit. The first storage node also forwards information about the first target to the second storage node. With such information, the second storage node then makes a target setting to the second logical unit.
Based on an instruction coming from the name server, the computer detects if a target interrelated to its initiator is set to the second storage node. If so, the computer issues an access request toward the second logical unit, and the second storage node receives the request.
At the time of data migration, not only data, but also information about data access can be migrated from a migration source to a migration destination.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a diagram showing an exemplary structure of a computer system in a first embodiment of the present invention;
FIG. 2 is a diagram showing an exemplary structure of a storage node;
FIG. 3 is a diagram showing an exemplary structure of memory provided to the storage node;
FIGS. 4A and 4B are both a diagram showing an exemplary structure of a logical unit;
FIGS. 5A to 5D are all a diagram showing an exemplary structure of an LU management table;
FIG. 6 is a diagram showing an exemplary structure of a name server;
FIG. 7A is a diagram showing an exemplary name management table during data migration;
FIG. 7B is a diagram showing another exemplary name management table after data migration;
FIG. 8 is a schematic diagram showing an exemplary process of migrating data in a logical unit from a storage node to another;
FIG. 9 is a flowchart of an exemplary process of, through addition of a new SN to the storage system of the first embodiment, migrating data from an LU of any existing SN to an LU of the newly-added SN;
FIG. 10 is a flowchart of an exemplary process of, through addition of a new SN to a network in a second embodiment of the present invention, migrating data from an LU of any existing SN to an LU of the newly-added SN;
FIG. 11 is a diagram showing an exemplary system structure in a third embodiment of the present invention;
FIG. 12 is a diagram showing an exemplary system structure in a fourth embodiment of the present invention;
FIG. 13 is a diagram showing an exemplary system structure in a fifth embodiment of the present invention;
FIG. 14 is a diagram showing an exemplary system structure in a sixth embodiment of the present invention;
FIG. 15A is a diagram showing an exemplary display screen of a management console 4 having displayed thereon the system structure before data migration;
FIG. 15B is a diagram showing another exemplary display screen of the management console 4 having displayed thereon the system structure after data migration;
FIG. 15C is a diagram showing still another exemplary display screen of the management console 4 having displayed thereon the interrelation among an LU, a target, and an initiator before data migration; and
FIG. 15D is a diagram showing still another exemplary display screen of the management console 4 having displayed thereon the interrelation among the LU, the target, and the initiator after data migration.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
In the below, exemplary embodiments of the present invention are described. Note that these embodiments are no more than examples, and the present invention is not restricted thereby.
In the accompanying drawings, component names and numbers are each provided with a lower-case alphabetic character such as a, b, or c for component distinction among those plurally provided in the same structure. If no such component distinction is required, no alphabetic character is provided to the component numbers.
First Embodiment
1. Exemplary System Structure (FIG. 1)
FIG. 1 is a diagram showing an exemplary system structure in a first embodiment.
A computer system includes: a plurality of storage nodes (in the below, simply referred to as SNs) 1, a plurality of host computers (in the below, hosts) 2, a network 30, a switch 3, a management console 4, and a name server 5. The switch 3 is used for establishing a connection over the network 30 among a plurality of network nodes. The network node is the collective expression including the SNs 1, the hosts 2, the management console 4, the name server 5, and others, all of which are connected to the network 30. The name server 5 is in charge of name management of the SNs 1 and the hosts 2, and their logical connections. The management console 4 is provided for managing a storage system 1000 structured by a plurality of SNs 1. Herein, the network 30 is a generic name for the switch 3 and a line for connecting the switch 3 with the hosts 2, the SNs 1, the management console 4, the name server 5, and others. In FIG. 1, the network 30 is encircled by a dashed line.
The SNs 1 are each provided with a controller (CTL) 10, and a logical unit (LU) 12Xx being a logical disk unit to be accessed by the hosts 2. Here, Xx denotes an identifier of the corresponding LU, where X is an integer of 0 or larger and x is a lowercase alphabetic character. The controller 10 exercises control over disks connected to the corresponding SN 1, and executes access requests coming from the hosts 2.
The hosts 2 are each a computer including a CPU, memory, and a network controller for establishing a connection to the network 30. The memory includes an initiator management table 2112, which will be described later.
Similarly to the hosts 2, the management console 4 is a computer including a CPU, memory, and a network controller for establishing a connection to the network 30. The memory stores a structure management program 4122, an LU management table 1111′, an initiator management table 2112 or 1113, and a target management table 1112, all of which will be described later. The management console 4 includes input units such as a keyboard and a mouse, and output units such as a display.
2. Exemplary Structure of Storage Node (SN) (FIG. 2)
FIG. 2 is a diagram showing an exemplary hardware structure of the SN 1.
The SN 1 includes the controller (CTL) 10, and a plurality of disks 120 y to be connected to the CTL 10 through a Fibre Channel 1030. The CTL 10 exercises control over input/output to/from the disks 120 y.
The CTL 10 includes: a CPU 100 exercising control over the SN 1; memory 101; a network controller 102 for establishing a connection to the network 30; an FC controller 103; and a bridge 104. Specifically, the memory 101 stores control programs to be executed by the CPU 100 and control data, and serves as a cache for increasing the speed of disk access. The FC controller 103 is provided for controlling the Fibre Channel (FC) 1030 to be connected to the disks 120 y. The bridge 104 exercises control over data or program transfer between the CPU 100 and the memory 101, data transfer between the network controller 102 and the memory 101, and data transfer between the FC controller 103 and the memory 101.
3. Exemplary Structure of Memory (FIG. 3)
FIG. 3 is a diagram showing an exemplary structure of the memory 101 provided in the SN 1.
The memory 101 is structured by a cache region 110, a control data region 111, and a control program region 112.
To increase the speed of disk access from the hosts, the cache region 110 serves as a disk cache (in the below, simply referred to as cache) for temporarily storing data of the disks 120 y or copies thereof.
The control data region 111 is provided for storing various tables and others for reference by the CPU 100 at the time of execution of the control programs. The various tables include a system structure management table 1110, an LU management table 1111, a target management table 1112, and an initiator management table 1113. Specifically, the system structure management table 1110 stores structure information about the storage system 1000 that is structured by a plurality of SNs 1. The LU management table 1111 stores structure information about the LU 12Xx in the SN 1. The target management table 1112 stores a target name (in the below, simply referred to as target) being a logical address provided to the LU 12Xx. The initiator management table 1113 stores an initiator name (in the below, simply referred to as initiator) being a logical address of an access source from which the LU 12Xx is accessed.
Note here that the target name or initiator name is exemplified by an iSCSI name in any system using the iSCSI protocol, a WWN (World Wide Name) in any FC system, and others. The target name is not restricted thereto as long as it is a globally unique identifier assigned to an access destination and shows no change after being created until deleted. This is applicable also to the initiator name. Herein, the target address or the initiator address may be used as information for identifying the access destination or the access source. The target address is exemplified by but not restricted to a Destination ID in any system using the FC protocol, and the initiator address is exemplified by but not restricted to a Source ID and others in any system using the FC protocol. The target name and the target address are both information used for identification of the access destination, and the initiator name and the initiator address are both information used for identification of the access source. Thus, the target address can be an alternative option for the target name, and the initiator address for the initiator name. In consideration thereof, the target name and the target address are hereinafter collectively referred to as "target name", and the same is true of the initiator.
The control program region 112 is provided for storing the control programs to be executed by the CPU 100. The control program region 112 stores various programs as follows. That is, an operating system program 1120 serves as a basic program to execute the control programs in the environment; a TCP/IP program 1121 for data transmission and reception over the network 30 using the TCP/IP protocol; an iSCSI control program 1122 for connecting between the hosts 2 and the SNs 1 using the iSCSI protocol; and a target control program 1123 for controlling a target process at the time of access reception from the host 2 being the initiator to the LU 12Xx being the target of the iSCSI. Herein, the target process includes command reception from the host 2, command interpretation after reception, and others. The various programs further include: a RAID control program 1124 for controlling RAID (Redundant Arrays of Inexpensive Disks) structured by a plurality of disks 120 y of the SN 1; a cache control program 1125 for management control of the disk cache formed in the cache region 110; a disk control program 1126 for executing a disk control process such as command generation with respect to a single disk 120 y; an FC control program 1127 for transmission and reception of command and data with the disk 120 y via the FC through control over the FC controller 103; an LU control program 1128 for structuring the LU 12Xx being a logical volume through formation of RAID from the disks 120 y; a migration program 1129 for executing a migration process for migrating data of the LU 12Xx among the SNs 1; an initiator control program 1130 for controlling the SN 1 to operate as initiator of iSCSI at the time of migration process to forward data of the LU 12Xx to any other SN 1; and a communications program 1131 for carrying out communications for name management with the name server 5 based on the iSCSI protocol specifications.
In the present embodiment, the network 30 is exemplified as an IP network for connection between the hosts 2 and the SNs 1, the network protocol as the TCP/IP protocol, and the data protocol between the hosts 2 and the SNs 1 as the iSCSI protocol being a block I/O interface. The present invention is not surely restrictive thereto.
4. Exemplary Structure of LU (FIGS. 4A and 4B)
FIGS. 4A and 4B are both a diagram showing an exemplary structure of the LU 12Xx.
The SN 1 in the present embodiment is presumably provided with three disks 1200, 1201, and 1202. Surely, the number of disks 120 y provided to the SN 1 is not restricted thereto, and any number will do as long as it is one or larger.
FIG. 4A is a diagram showing an exemplary structure of a RAID group (in the below, referred also to as RG).
The three disks 1200, 1201, and 1202 structure a RAID group 12 of the RAID 5 type, and the stripe size thereof is S blocks. Herein, a block means a logical block defined by the SCSI protocol specifications, and a disk sector of 512 bytes is often defined as a logical block. The block size is not restricted thereto, and surely any other value will do. In the RAID group 12, data is divided into units of S blocks and placed across the disks adjacent to one another. A stripe string includes three storage regions, each located in a different disk. One of these storage regions stores parity data resulting from an exclusive-OR calculation over the data in the other two storage regions. That is,
P0 = D0 + D1 (where + denotes exclusive OR)  Equation 1
The RAID group (RG) 12 structured as such includes two logical units LU0 and LU1. FIG. 4B is a diagram showing an exemplary structure of a logical unit. The LU0 (120) is a logical unit having a capacity of k blocks, and the LU1 (121) is a logical unit having a capacity of n blocks. In the RAID group, the logical block address (in the below, referred to as RG LBA) ranges from 0 to k−1 for the LU0, and from k to (k+n−1) for the LU1. Once the LUs are structured, each LU is accessed from the hosts 2 using an LBA local to that LU (Local LBA), so that each LU can behave as if it were an independent disk. That is, the Local LBA for the LU0(120) has addresses from 0 to (k−1), equal to the total capacity−1, and separately therefrom, the Local LBA for the LU1(121) has addresses from 0 to (n−1).
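The Local-LBA-to-RG-LBA translation and the parity rule of Equation 1 can be illustrated with a short sketch. The function names and example capacities are assumptions for illustration, not from the patent.

```python
# Sketch of the Local-LBA-to-RG-LBA mapping and the RAID 5 parity rule:
# LU0 occupies RG LBA 0..k-1 and LU1 occupies RG LBA k..k+n-1, so the
# translation is just an offset by the LU's Start RG LBA.

def local_to_rg_lba(local_lba, start_rg_lba, capacity):
    """Translate an LU-local block address to a RAID-group block address."""
    if not 0 <= local_lba < capacity:
        raise ValueError("local LBA out of range for this LU")
    return start_rg_lba + local_lba

def parity(d0: bytes, d1: bytes) -> bytes:
    """RAID 5 parity over a two-data-block stripe: P0 = D0 xor D1."""
    return bytes(a ^ b for a, b in zip(d0, d1))

k, n = 100, 50                          # example capacities (blocks)
assert local_to_rg_lba(0, 0, k) == 0    # LU0 block 0 -> RG LBA 0
assert local_to_rg_lba(0, k, n) == 100  # LU1 block 0 -> RG LBA k
p0 = parity(b"\x0f", b"\xf0")
# XOR parity lets any one block be rebuilt from the other two:
assert parity(p0, b"\x0f") == b"\xf0"
```

The last assertion shows why exclusive OR is used: XOR-ing the parity with either surviving data block recovers the lost one, which is the RAID 5 recovery property the stripe layout relies on.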
5. Exemplary structure of LU Management Table (FIGS. 5A to 5D)
FIGS. 5A to 5D are all a diagram showing an exemplary structure of the LU management table 1111 stored in the memory 101 of the SN 1. In the table, LU denotes an LU number, and RG denotes identification information of a RAID group having LUs structured therein. Further, Start RG LBA denotes an RG LBA located at the LU head in the RG, LEN denotes the LU capacity (unit of which is block), Initiator denotes an initiator name of any initiator allowed to access the corresponding LU, e.g., initiator set to the host, and Target denotes a target name assigned to the corresponding LU.
FIG. 5A shows an exemplary LU management table 1111 a of the SNa (1 a). The LU0 a is located in the RG0 a, with a Start RG LBA of 0 and a capacity of k; the initiator allowed to access it is the host (Host a) 2 a with the initiator name Init-a0, and its target name is Targ-a0. Similarly, the LU1 a is located in the RG0 a, with a Start RG LBA of k and a capacity of n; the initiator allowed to access it is the host (Host b) 2 b with the initiator name Init-b0, and its target name is Targ-a1.
Herein, although the LU and the target have a one-to-one relationship, there may be a case where a plurality of initiators are allowed to access a target. Once the LU management table is added with an initiator name into the column of Initiator, the target control program 1123 responsively allows access only to the LU 12Xx corresponding to the initiator whose initiator name is thus entered. When a plurality of initiators are allowed to access any one specific LU 12Xx, the column of Initiator in the LU management table 1111 is provided with a plurality of entries for registration of a plurality of initiator names. If there is no access limitation for the LU 12Xx, i.e., if every initiator is allowed to access the LU 12Xx, no name is entered into the column of Initiator corresponding to the LU 12Xx (enter NULL). The details of interrelation between the initiator name and the target name are left for later description.
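The access control implied by the Initiator column can be sketched as follows, assuming an empty (NULL) entry means no access limitation. The dictionary layout and function name are illustrative assumptions, not the patent's actual table format.

```python
# Sketch of the initiator access check on the LU management table:
# an empty (NULL) initiator list means every initiator may access the LU.

def access_allowed(lu_table, lu, initiator):
    """Return True if this initiator may access the given LU."""
    allowed = lu_table[lu]["initiators"]
    if not allowed:          # NULL entry: no access limitation
        return True
    return initiator in allowed

lu_table = {
    "LU0a": {"target": "Targ-a0", "initiators": ["Init-a0"]},
    "LU1a": {"target": "Targ-a1", "initiators": ["Init-b0"]},
    "LU2a": {"target": "Targ-a2", "initiators": []},   # unrestricted
}
assert access_allowed(lu_table, "LU0a", "Init-a0")
assert not access_allowed(lu_table, "LU0a", "Init-b0")
assert access_allowed(lu_table, "LU2a", "Init-x9")
```

Registering several initiator names in the list models the multiple-entry case described above, where more than one initiator is allowed to access a single LU.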
The management console 4 also includes in the memory the LU management table 1111′, which is a combination result of the LU management table 1111 each included in the SNs 1 connected to the network 30. Compared with the LU management table 1111, the LU management table 1111′ is additionally provided with identification information for the corresponding SN 1 as shown in FIG. 15C.
6. Exemplary structure of Name Server (FIG. 6)
FIG. 6 is a diagram showing an exemplary structure of the name server 5. The name server 5 is provided with: a CPU 500 in charge of control entirely over the name server 5; memory 501 for storing control programs to be executed by the CPU 500 and control data; a network controller 502 for connecting to the network 30; and a bridge 504 exercising control over data or program transfer between the CPU 500 and the memory 501, and data transfer between the network controller 502 and the memory 501.
The memory 501 has a control data region 511, and a control program region 512.
The control data region 511 is provided for storing various tables and others for reference by the CPU 500 when executing the control programs. The control data region 511 stores a name management table 5111 including initiator and target names for iSCSI, and the connection relation between the initiator and the target.
The control program region 512 is provided for storing the control programs to be executed by the CPU 500. The control program region 512 stores various programs as follows. That is, an operating system program 5120 serving as a basic program to execute the control programs in the environment; a TCP/IP program 5121 for data transmission and reception over the network 30 using the TCP/IP protocol; a name management program 5122 in charge of name management of the iSCSI nodes (i.e., hosts 2 and storage nodes SNs 1) to be connected over the network 30, and controlling the interrelation between the initiators and iSCSI nodes; and a communications program 5123 for carrying out communications for name management of initiators (e.g., hosts 2) and targets (e.g., SNs 1) based on the iSCSI protocol specifications.
In the present embodiment, the name server 5 is exemplified by an iSNS (iSCSI Name Server) of the iSCSI protocol specifications. This is not surely restrictive, and to realize the present embodiment, any other name server specifications can be used to construct a name server.
7. Exemplary Structure of Name Management Table (FIGS. 7A and 7B)
FIGS. 7A and 7B are both a diagram showing an exemplary name management table 5111 stored in the memory 501 of the name server 5. The name management table 5111 includes the initiator management table (2112 or 1113) and the target management table 1112.
In the initiator management table 2112 of FIGS. 7A and 7B, Initiator denotes an initiator name under the management of an entry of the table, Entity denotes an identifier specifying to which device the initiator belongs, Portal denotes a portal including the initiator, and PortalGr denotes a portal group including the portal.
In the target management table 1112 of FIGS. 7A and 7B, Target denotes a target name under the management of an entry of the table, Initiator denotes an initiator name allowed to access the target, Entity denotes an identifier specifying to which device the target belongs, Portal denotes a portal including the target, and PortalGr denotes a portal group including the portal.
Note that the initiator management table in the name management table 5111 is the same as the initiator management table stored in the memory of the device having the initiator. Similarly, the target management table in the name management table 5111 is the same as the target management table stored in the memory of the device having the target. Further, the management console 4 includes, in the memory, the initiator management table and the target management table being the same as those in the name server 5.
For example, initiator management tables 2112 a and 2112 b of FIG. 7A are both an initiator management table for an initiator of the Host a(2 a) or the Host b(2 b). The Host a(2 a) includes in the memory the initiator management table 2112 a similar to the one shown in FIG. 7A, and the Host b(2 b) includes in the memory the initiator management table 2112 b similar to the one shown in FIG. 7A. Similarly, the initiator management table 1113 of FIG. 7A is an initiator management table for an initiator located in the SNa(1 a), and the SNa(1 a) includes in the memory 101 the initiator management table 1113 similar to the one shown in FIG. 7A. Further, target management tables 1112 a and 1112 b of FIG. 7A are both a target management table for a target of the SNa(1 a) or the SNb(1 b). The SNa(1 a) includes in the memory 101 the target management table 1112 similar to the target management table 1112 a, and the SNb(1 b) includes in the memory 101 a target management table 1112 similar to the target management table 1112 b.
As is known from the above, the name server 5 uses the name management table 5111 to collectively manage the initiator management tables of the initiators connected to the network 30, and the target management tables of the targets connected to the network 30.
Refer back to FIG. 7A, which exemplarily shows three pairs of initiator and target.
A first pair includes an initiator Init-a0 and a target Targ-a0. The initiator Init-a0 is located in a portal Ia0 of the Host a(2 a), and belonging to a portal group IPGa0. The target Targ-a0 is located in a portal Ta0 of the SNa (1 a), and belonging to a portal group TPGa0 to allow the initiator Init-a0 to access thereto.
A second pair includes an initiator Init-b0 and a target Targ-a1. The initiator Init-b0 is located in a portal Ib0 of the Host b(2 b), and belonging to a portal group IPGb0. The target Targ-a1 is located in a portal Ta1 of the SNa (1 a), and belonging to a portal group TPGa1 to allow the initiator Init-b0 to access thereto.
A third pair includes an initiator Init-SNa1 and a target Targ-b0. The initiator Init-SNa1 is located in a portal ISNa1 of the SNa (1 a), and belonging to a portal group IPGSNa1. The target Targ-b0 is located in a portal Tb0 of the SNb (1 b), and belonging to a portal group TPGb0.
Herein, the portal denotes a logical portal located in the Host 2 or the network controller of the SN 1, and structured by a pair of an IP address of a physical port and a TCP port number. The portal can be plurally provided if any one specific physical port is provided with a plurality of TCP ports. The portal group includes a plurality of portals as an aggregate to be used as a single communications path. In the below, no mention is made to the portal group except for the group name.
The pairs of initiator and target are made between any initiators and targets connected to the network 30, and managed by the name management table 5111.
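A minimal sketch of how the name management table pairs initiators with targets, and how discovery for a given initiator might consult it. The list-of-dictionaries layout, the `discover` function, and its return format are illustrative assumptions rather than the iSNS wire format.

```python
# Sketch of the name management table: each target entry records which
# initiator may reach it, so discovery for an initiator returns only its
# paired targets. Entries mirror the three pairs described above.

name_table = [
    {"target": "Targ-a0", "initiator": "Init-a0",   "entity": "SNa", "portal": "Ta0"},
    {"target": "Targ-a1", "initiator": "Init-b0",   "entity": "SNa", "portal": "Ta1"},
    {"target": "Targ-b0", "initiator": "Init-SNa1", "entity": "SNb", "portal": "Tb0"},
]

def discover(initiator):
    """Return (target, entity) pairs this initiator is allowed to access."""
    return [(e["target"], e["entity"]) for e in name_table
            if e["initiator"] == initiator]

assert discover("Init-a0") == [("Targ-a0", "SNa")]
assert discover("Init-b0") == [("Targ-a1", "SNa")]
```

When a target entry's entity changes from one SN to another during migration, the same discovery query transparently directs the initiator to the new storage node, which is the mechanism the embodiments rely on for changing the access path.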
8. Exemplary SN Add-In and LU Migration Process
Described now is a process of achieving the load balance among the SNs 1 through addition of a new storage node 1 to the storage system 1000, and through data migration from the LU 12Xx of any existing storage node 1 to the newly-provided SN 1.
FIG. 8 is a schematic diagram showing, through addition of a new SN 1 to the storage system 1000, an exemplary process of data migration from the LU 12Xx of any existing SN 1 to the newly-added SN 1. Note that FIG. 8 shows the state halfway through the construction process of the system of FIG. 1.
It is assumed here that, as the first stage, the storage system 1000 does not include the SNb (1 b) but only the SNa (1 a), and includes the Host a(2 a) and Host b(2 b).
The Host a(2 a) is making access to an LU0 a(120 a) of the SNa(1 a), and the Host b(2 b) is making access to an LU1 a(121 a) of the SNa (1 a).
The Host a(2 a) includes an initiator, which is entered to, as the initiator name of Init-a0, both the initiator management table 2112 a of the Host a(2 a) and the name management table 5111 of the name server 5. Similarly, the Host b(2 b) includes an initiator, which is entered to, as the initiator name of Init-b0, both the initiator management table 2112 b of the Host b(2 b) and the name management table 5111 of the name server 5.
The LU0 a(120 a) of the SNa(1 a) is added as the target name of Targ-a0 to the target management table 1112 of the SNa(1 a) and the name management table 5111 of the name server 5. Also added to the target management table 1112 and the name management table 5111 is Init-a0 as the initiator allowed to access the target Targ-a0. Similarly, the LU1 a(121 a) of the SNa(1 a) is added as the target name of Targ-a1 to the target management table 1112 of the SNa(1 a) and the name management table 5111 of the name server 5. Also added to the target management table 1112 and the name management table 5111 is Init-b0 as the initiator allowed to access the target of Targ-a1.
As such, two pairs are made: Init-a0 with Targ-a0, and Init-b0 with Targ-a1. FIG. 7A shows the name management table 5111 under such pair making. The target management table 1112 and the name management table 5111 are added with initiators in accordance with the iSCSI protocol specifications. Assumed here is that the Host a(2 a) is already operating in a state accessible to the LU0 a(120 a), and the Host b(2 b) in a state accessible to the LU1 a(121 a). That is, as shown in FIG. 5A, the LU management table 1111 in the memory 101 of the SNa(1 a) includes Targ-a0 as the target name of the LU0 a(120 a), and Init-a0 as the initiator in the Host a(2 a) that is allowed to access the LU0 a(120 a). Similarly, the LU management table 1111 includes Targ-a1 as the target name of the LU1 a(121 a), and Init-b0 as the initiator in the Host b(2 b) allowed to access the LU1 a(121 a).
By referring to FIGS. 8 and 9, described next is a process of data migration from the LU1 a(121 a) to the SNb(1 b) newly added to the storage system 1000 due to overloaded SNa(1 a), for example. FIG. 9 is a flowchart of an exemplary process of, through addition of a new SN 1 to the storage system 1000, migrating data from an LU 12Xx of any existing SN 1 to an LU 12Xx of the newly-added SN 1.
9. Add-In of Storage Node SNb (step 9001 of FIG. 9)
First, the SNb(1 b) is connected to the switch 3 to add the SNb(1 b) to the storage system 1000 (step 9001 of FIG. 9). The SNb(1 b) is assumed to have a storage region large enough to store the data in the LU1 a(121 a) of the SNa(1 a).
10. Study of Migration Source LU (step 9002 of FIG. 9)
The CPU of the management console 4 goes through the structure management program 4122 to acquire information about the LU1 a(121 a), which is the migration source LU (step 9002). In the below, when a process is executed by the CPU running any corresponding program, it is simply said that "the program goes through the process".
To be specific, the structure management program 4122 asks the SNa(1 a) for structure information of the LU1 a(121 a). In response to such a request, the LU control program 1128 of the SNa(1 a) refers to the LU management table 1111 and forwards the applicable structure information of the LU1 a(121 a) to the management console 4. The structure information includes the information in the LU management table 1111 of the SNa(1 a), and information about the RG structure (RAID structure) in which the LU1 a(121 a) is structured. The structure management program 4122 enters, into the LU management table 1111′ stored in its own memory, the information received from the SNa(1 a) together with the identification information of the SNa(1 a). Then, based on thus received information, the LU1 a(121 a) is identified as being the LU having a capacity of n blocks in a RAID group of RAID5 structure.
Herein, the structure management program 4122 may skip step 9002 if the management console 4 already has information about the SNs 1 in the storage system 1000, i.e., information in the LU management table 1111, and the RAID structure of the respective LUs, and if the management console 4 is exercising control over the structure information using its own LU management table 1111′.
11. Construction of Migration Destination LU and Target Registration (step 9003 of FIG. 9)
Next, the structure management program 4122 of the management console 4 instructs the SNb(1 b) to construct an LU0 b(120 b) having the same capacity as the LU1 a(121 a) being the migration source to any appropriate RAID group of the newly added SNb(1 b). Here, the RAID group considered appropriate may be the one having the same RAID structure as the LU1 a(121 a).
The structure management program 4122 also instructs the SNb(1 b) to set thus newly constructed LU0 b(120 b) as a target to the portal Tb0 having the designated physical port and TCP port number, and to the portal group TPGb0.
When the SNb(1 b) receives such an instruction, the LU control program 1128 constructs the LU0 b(120 b) so that a target having the target name of Targ-b0 is created to the portal Tb0 and the portal group TPGb0. Then, as shown in FIG. 5B, the LU management table 1111 b is added with Targ-b0 for target name, LU0 b for LU, RG0 b for RG, 0 for Start RG LBA, and n for LEN.
The communications program 1131 of the SNb(1 b) forwards a request to the name server 5 to enter any new target thereto. Upon reception of such a request, the name server 5 registers the target management table 1112 b of FIG. 7A to the name management table 5111 as information about the new target. At this point, the target management table 1112 b is storing Targ-b0 for target name, SNb for Entity, Tb0 for Portal, and TPGb0 for PortalGroup, and the column of Initiator is vacant, which will be filled in step 9005 that is described later.
The target control program 1123 of the SNb(1 b) enters, also to the target management table 1112 in its own memory 101, the same contents as stored in the target management table 1112 b in the name management table 5111 of the name server 5, i.e., Targ-b0 for target name, SNb for Entity, Tb0 for Portal, and TPGb0 for PortalGroup (step 9003 of FIG. 9).
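The two-step registration of step 9003, i.e., entering a target in the storage node's own table and mirroring the same contents to the name server, can be sketched as follows (function and field names are assumptions; the initiator set is deliberately left vacant, as it is filled only in step 9005):

```python
def register_target(local_table, name_server_table,
                    target, entity, portal, portal_group):
    """Create a target entry in the SN's own target management table,
    then mirror the same contents to the name server's name management
    table. The Initiator column is left empty until step 9005."""
    entry = {"entity": entity, "portal": portal,
             "portal_group": portal_group, "initiators": set()}
    local_table[target] = entry                        # SN's own table (1112)
    # Independent copy for the name server, so later local changes do
    # not silently alter the registered contents (name table 5111).
    name_server_table[target] = {**entry, "initiators": set()}
    return entry
```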
In the above manner, by the SNb(1 b), the LU0 b(120 b) is constructed, and the target Targ-b0 is registered. The construction information about the LU0 b(120 b) and the contents of the target management table 1112 of the target Targ-b0 are forwarded from the SNb(1 b) to the structure management program 4122 of the management console 4. In this manner, the information is also registered into the LU management table 1111′ and the target management table 1112 of the management console 4. Here, the structure information about the LU0 b(120 b) includes the RAID structure of the RAID group of the LU0 b(120 b), and the information of the LU0 b(120 b) entered to the LU management table of the SNb(1 b).
12. Construction of Initiator to Migration Source SN (step 9004 of FIG. 9)
Next, the structure management program 4122 of the management console 4 instructs the SNa(1 a) being the migration source for initiator construction to the portal ISNa1 having the designated physical portal and port number, and the portal group IPGSNa1.
When the SNa(1 a) receives such an instruction, the initiator control program 1130 responsively creates an initiator having the initiator name of init-SNa1 to the portal ISNa1, and the portal group IPGSNa1. Then, the communications program 1131 asks the name server 5 to enter the resulting initiator thereto.
Upon reception of such a request, the name server 5 registers to the name management table 5111 an initiator management table 1113SNa1 of FIG. 7A as information about thus newly-constructed initiator. The initiator management table 1113SNa1 already has init-SNa1 for initiator name, SNa for Entity, ISNa1 for Portal, and IPGSNa1 for PortalGroup.
Here, the initiator control program 1130 of the SNa(1 a) enters, also to the initiator management table 1113 in its own memory 101, the same contents as stored in the initiator management table 1113SNa1 in the name management table 5111 of the name server 5, i.e., init-SNa1 for initiator name, SNa for Entity, ISNa1 for Portal, and IPGSNa1 for PortalGroup.
In the above manner, the SNa(1 a) is through with initiator construction, and the contents of the initiator management table 1113 of the initiator init-SNa1 are forwarded from the SNa(1 a) to the structure management program 4122 of the management console 4 so as to be entered to the initiator management table 1113 of the management console 4.
13. Initiator Registration of Migration Source SN to Target of Migration Destination SN (step 9005 of FIG. 9)
Next, the structure management program 4122 of the management console 4 issues an instruction towards the SNb(1 b) to provide the initiator init-SNa1 of the SNa(1 a) with an access permission for the target Targ-b0.
After the SNb(1 b) receives such an instruction, as shown in FIG. 5B, the LU control program 1128 enters the initiator Init-SNa1 to the LU management table 1111 b as an initiator given access permission to the target Targ-b0, i.e., the LU0 b. Further, the target control program 1123 of the SNb(1 b) enters the initiator Init-SNa1 to the target management table 1112 of the target Targ-b0 as an initiator given access permission to the target Targ-b0.
Then, the SNb(1 b) asks the name server 5 to enter an initiator of Init-SNa1 to the target management table 1112 b as an initiator allowed to access the target Targ-b0. Here, the target management table 1112 b is the one registered into the name management table 5111 in step 9003. In this manner, on the name management table 5111 of the name server 5, the relation between the initiator Init-SNa1 and the target Targ-b0(LU0 b) is established.
As such, the initiator of the migration source SN is successfully entered to the target of the migration destination SN.
Here, also to the LU management table 1111′ in the memory and the target management table 1112 of the target Targ-b0, the structure management program 4122 of the management console 4 enters Init-SNa1 as an initiator allowed to access the target Targ-b0.
14. Execution of Discovery (step 9006 of FIG. 9)
Through registration of a new pair of initiator and target to the name management table 5111 of the name server 5 in step 9005, the initiator-target relation under the management of the name server 5 shows some change. To deal with such a change, the name management program 5122 of the name server 5 issues a State Change Notification (SCN) to the corresponding initiators, i.e., devices such as the hosts 2 and SNs 1 each including an initiator. The initiators that received such an SCN go through a process referred to as discovery. During discovery, the initiators each make an inquiry to the name server 5 as to whether any change has occurred to the targets accessible thereby, i.e., whether any accessible target(s) have been added or deleted. Upon reception of such an inquiry, the name server 5 responsively makes a search of the name management table 5111 based on the initiator name included in the inquiry. After the search, a response is made with the target management information about any target(s) accessible by the inquiring initiator, i.e., the information having been registered in the target management table.
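The SCN-and-discovery exchange described above can be sketched schematically as below (the table layout and function names are assumptions): the name server determines which initiators an SCN concerns, and discovery filters the name table down to the targets the inquiring initiator may access.

```python
def notify_scn(name_table, changed_target):
    """Determine which initiators receive a State Change Notification
    when `changed_target` is registered or modified on the name server."""
    return sorted(name_table.get(changed_target, {}).get("initiators", ()))

def discover(name_table, initiator):
    """Discovery: the name server returns the management information of
    every target the inquiring initiator is permitted to access."""
    return {t: info for t, info in name_table.items()
            if initiator in info["initiators"]}
```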
In step 9006, as for the initiators located in the hosts 2, no change is observed for the targets accessible by the corresponding initiator. Thus, even if the host 2 goes through discovery, no target change is discovered, and nothing happens.
On the other hand, after the SNa(1 a) receives the SCN, the initiator control program 1130 asks the iSCSI control program 1122 to go through discovery. As a result, the iSCSI control program 1122 is notified, by the name server 5, of a new target Targ-b0 corresponding to the initiator Init-SNa1 of the SNa(1 a).
In response thereto, the initiator control program 1130 of the SNa(1 a) instructs the TCP/IP program 1121 to establish any new TCP connection between the TCP port of the SNa(1 a) and the TCP port of the SNb(1 b).
Then, the initiator control program 1130 instructs the iSCSI control program 1122 to go through an iSCSI log-in process to establish a new iSCSI session between the portal ISNa1 and the portal Tb0 of the SNb(1 b). In this manner, a communications path using iSCSI is established between the SNa(1 a) and the SNb(1 b).
Next, the initiator control program 1130 of the SNa(1 a) issues an iSCSI Inquiry command to the target Targ-b0 of the SNb(1 b) to detect an LU0 b. This allows the SNa(1 a) to access the LU0 b(120 b) of the SNb(1 b).
15. Execution of LU Migration (step 9007 of FIG. 9)
The structure management program 4122 of the management console 4 issues an instruction toward the SNa(1 a) to migrate data in the LU1 a(121 a) to the LU0 b(120 b) of the SNb(1 b).
Upon reception of such an instruction, the SNa(1 a) activates the migration program 1129. Using the TCP session established in step 9006, the migration program 1129 communicates with the migration program 1129 of the SNb(1 b) under any specific protocol to check the state of the LU0 b(120 b), e.g., whether the LU1 a(121 a) and the LU0 b(120 b) are of the same size. Then, the SNb(1 b) is notified that migration is now started.
Then, the migration program 1129 of the SNa(1 a) issues a command to the target control program 1123. In response thereto, the target control program 1123 reads, into the cache 110, data of the LU1 a(121 a) in units of any appropriate size. The migration program 1129 issues another command to the initiator control program 1130. In response, the initiator control program 1130 issues an iSCSI writing command to the LU0 b(120 b) of the SNb(1 b) to write the data read into the cache 110. After receiving the writing command and the data, the SNb(1 b) stores the data into its cache 110, and then writes the data thus stored in the cache 110 to the LU0 b(120 b). By repeating such a procedure, the data in the LU1 a(121 a) is completely copied into the LU0 b(120 b) ((1) of FIG. 8).
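The copy loop described above, i.e., read a chunk of the source LU into the cache, then write it to the destination LU over iSCSI, can be sketched as follows. The chunk size and the callback-based I/O are assumptions for illustration; the embodiment itself only says "any appropriate size".

```python
CHUNK = 64 * 1024  # copy unit in bytes; an assumption, not specified

def migrate_lu(read_block, write_block, length):
    """Copy `length` bytes from the source LU to the destination LU in
    chunks: read_block mimics the target control program filling the
    cache, write_block mimics the initiator issuing an iSCSI write."""
    offset = 0
    while offset < length:
        n = min(CHUNK, length - offset)
        cache = read_block(offset, n)   # LU1a -> cache 110
        write_block(offset, cache)      # cache 110 -> LU0b via iSCSI
        offset += n
```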
Note here that during such a copying process, the initiator init-b0 of the Host b(2 b) keeps accessing the LU1 a(121 a) of the SNa(1 a), i.e., target Targ-a1.
During the copying process, if the SNa(1 a) receives from the Host b(2 b) a writing command and writing data addressed to the LU1 a(121 a), the migration program 1129 of the SNa(1 a) writes the writing data to the LU1 a(121 a), and also forwards the writing data to the LU0 b(120 b) of the SNb(1 b). Then, the SNa(1 a) reports to the Host b(2 b) that the writing process is through. In this manner, data written during the copying process is also reflected in the LU0 b(120 b).
As an alternative manner, storage regions storing different data between the migration source LU1 a(121 a) and the migration destination LU0 b(120 b) may be managed by the SNa(1 a) using a differential bit map. To be specific, the SNa(1 a) registers a differential bit on the differential bit map for any storage region that is either not yet through with data copying from the LU1 a(121 a) to the LU0 b(120 b), or through with copying but thereafter showing no data coincidence between the LU1 a(121 a) and the LU0 b(120 b) due to a data update in the LU1 a(121 a). Such an update is caused by reception of writing data addressed to the LU1 a(121 a) from the Host b(2 b). Based on the differential bit map, after the data copying process is through, the SNa(1 a) may write the data stored in the LU1 a(121 a) to the LU0 b(120 b) only for the storage regions having been registered with the differential bit. In this manner, the writing data received from the Host b(2 b) during the copying process can be copied to the LU0 b(120 b) being the migration destination.
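The differential bit map can be sketched as below. The region granularity and method names are assumptions; the point is that a region is dirty both before its first copy and after any host write that follows its copy, and that only dirty regions need a final copy pass.

```python
class DiffBitmap:
    """Tracks regions of the source LU whose contents are not known to
    match the destination: regions not yet copied, and regions dirtied
    by host writes that arrive during or after their copy."""

    def __init__(self, n_regions):
        self.dirty = [True] * n_regions   # nothing copied yet

    def mark_copied(self, region):
        self.dirty[region] = False        # region copied to destination

    def mark_written(self, region):
        self.dirty[region] = True         # host write during migration

    def pending(self):
        """Regions that still require a copy pass."""
        return [i for i, d in enumerate(self.dirty) if d]
```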
As such, by the time when the copying process is through, the data in the LU1 a(121 a) and the data in the LU0 b(120 b) are to be the same ((1) of FIG. 8). This is the end of data copying.
16. Copying of Target (step 9008 of FIG. 9)
Once the copying process is through, the migration program 1129 of the SNa(1 a) instructs the LU control program 1128 to refer to the LU management table 1111 so that the target of the LU1 a(121 a), i.e., Targ-a1, and the initiator thereof, i.e., Init-b0, are acquired from the LU management table 1111 a of FIG. 5A. Then, the migration program 1129 of the SNa(1 a) uses any new or existing TCP connection between the SNa(1 a) and the SNb(1 b), e.g., the TCP connection established in step 9006, to transfer information about thus acquired initiators and targets of the LU1 a(121 a).
Then, the migration program 1129 of the SNb(1 b) issues an instruction to the LU control program 1128. The LU control program 1128 responsively enters, to the LU management table 1111 of the LU0 b(120 b) of FIG. 5C, Targ-a1 to Target, and Init-b0 to Initiator. More in detail, the LU control program 1128 enters the target and initiator of the LU1 a received from the SNa(1 a) to the LU management table of the LU0 b(120 b) to change the target and initiator of the LU0 b(120 b) to those of the LU1 a(121 a). In this manner, the data and the access information, i.e., target and initiator, of the LU1 a(121 a) of the SNa(1 a) are taken over by the LU0 b(120 b) of the SNb(1 b), and this is the end of LU migration.
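The takeover in step 9008 amounts to overwriting the destination LU's access information with that of the source LU. A minimal sketch, assuming dictionary-shaped LU management entries (the field names are assumptions):

```python
def take_over_access_info(src_lu, dst_lu):
    """After the data copy, the destination LU inherits the source LU's
    target name and permitted initiator, so that hosts keep addressing
    the same target name even though its location has changed."""
    dst_lu["target"] = src_lu["target"]        # e.g. Targ-a1
    dst_lu["initiator"] = src_lu["initiator"]  # e.g. Init-b0
    return dst_lu
```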
After completion of LU migration as such, a completion notice is forwarded by the SNb(1 b) to the SNa(1 a), and by the SNa(1 a) to the structure management program 4122 of the management console 4. Upon reception of the completion notice, the management console 4 enters, also to its own LU management table 1111′, Targ-a1 to the Target of the LU0 b(120 b), and Init-b0 to the Initiator thereof.
As such, the LU migration process is completed.
17. Deletion of Initiator being Migration Source (step 9009 of FIG. 9)
After receiving the completion notice of LU migration, the structure management program 4122 of the management console 4 instructs the SNa(1 a) to go through initiator deletion. The SNa(1 a) responsively instructs the initiator control program 1130 to cut off the connection between the initiator Init-SNa1 and the target Targ-b0 used for data migration, and delete the initiator Init-SNa1. The initiator control program 1130 instructs the iSCSI control program 1122 to cut off the session between the initiator Init-SNa1 and the target Targ-b0. Also, the initiator control program 1130 deletes the initiator management table 1113 about the initiator Init-SNa1 from the memory 101, and instructs the name server 5 to delete the initiator management table 1113SNa1 about the initiator Init-SNa1.
The name server 5 instructed as such accordingly deletes the initiator management table 1113SNa1 having been registered in the name management table 5111.
As such, the initiator Init-SNa1 is deleted by following, in reverse, steps 9004 and 9005 of initiator registration.
The structure management program 4122 of the management console 4 also deletes the initiator management table 1113 of the initiator Init-SNa1 stored in its own memory.
18. Deletion of Migration Source Target (step 9010 of FIG. 9)
The structure management program 4122 of the management console 4 instructs the SNa(1 a) to cut off the session established between the target Targ-a1 set to the LU1 a(121 a) being the migration source and the initiator Init-b0 located in the Host b(2 b), and to delete the target Targ-a1 set to the migration source LU1 a(121 a).
The LU control program 1128 of the SNa(1 a) instructed as such then responsively issues an instruction toward the iSCSI control program 1122 to cut off the session between the initiator Init-b0 of the Host b(2 b) and the target Targ-a1 of the SNa(1 a), and the iSCSI control program 1122 responsively executes the instruction. The LU control program 1128 deletes, from the LU management table 1111 a of FIG. 5A, any entry relating to the LU1 a(121 a). As a result, the LU management table in the memory 101 of the SNa(1 a) looks like the LU management table 1111 a of FIG. 5D. Further, the SNa(1 a) deletes the entry of Targ-a1 from the target management table 1112 in the memory 101.
The communications program 1131 of the SNa(1 a) instructs the name server 5 to delete, also from the name management table 5111, any entry relating to the target Targ-a1 in the target management table 1112. The name server 5 then responsively goes through deletion as instructed ((2) of FIG. 8).
Here, the structure management program 4122 of the management console 4 deletes any entry relating to the LU1 a(121 a) from the LU management table 1111′ in its own memory, and also deletes the target management table relating to the target Targ-a1.
19. Change of Migration Destination Target (step 9011 of FIG. 9)
The structure management program 4122 of the management console 4 then instructs the SNb(1 b) to enter, to the name server 5, the target Targ-a1 having been set to the migration destination LU0 b(120 b) in step 9008.
The communications program 1131 of the SNb(1 b) instructed as such notifies, in a similar manner to step 9003, the name server 5 to change the target name and the initiator name in the target management table 1112 b of the name management table 5111 into target: Targ-a1, and initiator: Init-b0 ((3) of FIG. 8). The name management program 5122 of the name server 5 changes the name management table 5111 as notified. The resulting name management table 5111 looks like the one shown in FIG. 7B.
The target control program 1123 of the SNb(1 b) also applies the same change as done by the name server 5. That is, the target management table 1112 stored in the memory 101 of the SNb(1 b) is changed similarly. Specifically, in the target management table 1112, the target is changed from Targ-b0 to Targ-a1, and the initiator is changed from Init-SNa1 to Init-b0, so that the table includes Target: Targ-a1, Initiator: Init-b0, Entity: SNb, Portal: Tb0, and PortalGroup: TPGb0.
The structure management program 4122 of the management console 4 stores, into its own memory, a new target management table of the target Targ-a1, which includes Target: Targ-a1, Initiator: Init-b0, Entity: SNb, Portal: Tb0, and PortalGroup: TPGb0.
20. Execution of Discovery (step 9012 of FIG. 9)
In consideration of the initiator-target relation changed in step 9011, the name management program 5122 of the name server 5 issues a State Change Notification (SCN) to the initiators ((4) of FIG. 8). In response to such an SCN, the initiators each execute discovery to inquire of the name server 5 whether any change has occurred to their own accessible targets.
After the Host b(2 b) receives the SCN, and after an inquiry is issued to the name server 5 through execution of discovery ((5) of FIG. 8), the Host b(2 b) is notified by the name server 5 of the management information about the target Targ-a1 relating to the initiator Init-b0, i.e., the information registered in the target management table 1112 b of the target Targ-a1. Accordingly, this tells the Host b(2 b) that the target Targ-a1 relating to the initiator Init-b0 has moved to the SNb(1 b).
Thus, a TCP/IP program (not shown) of the Host b(2 b) establishes a new TCP connection between the TCP port of the Host b(2 b) and the TCP port of the SNb(1 b).
Then, the iSCSI control program (not shown) of the Host b(2 b) goes through an iSCSI log-in process to the SNb(1 b) to establish a new iSCSI session between the portal Ib0 of the Host b(2 b) and the portal Tb0 of the SNb(1 b). As a result, a communications path using iSCSI is established between the Host b(2 b) and the SNb(1 b), and thus path switching is completed ((6) of FIG. 8). Accordingly, hereinafter, if the initiator Init-b0 of the Host b(2 b) forwards a writing command and writing data to the target Targ-a1, the SNb(1 b) including the target Targ-a1 receives the command and data. The writing data is thus stored in the LU0 b(120 b) including the target Targ-a1.
In the present embodiment, when data stored in the LU1 a(121 a) of the SNa(1 a) is migrated into the LU0 b(120 b) of the SNb(1 b) being the migration destination, the LU0 b(120 b) takes over not only the data but also the access information. Here, the access information includes the target names of targets set to the LU1 a(121 a) being the migration source, and the initiator names of initiators allowed to access those targets. Therefore, the Host b(2 b) having gone through discovery acknowledges only that the target Targ-a1 corresponding to its initiator Init-b0 has changed in location from the SNa(1 a) to the SNb(1 b); it does not acknowledge that the target itself has been changed. This is because the target name Targ-a1 corresponding to the initiator Init-b0 shows no change even after data migration. Thus, in the present embodiment, as long as the target name Targ-a1 is not changed, even if the location of the target is changed, the data stored in the LU corresponding to the target is guaranteed as not having been changed. That is, the Host 2 can access the same data as long as it accesses the target having the same target name.
When the session between the initiator Init-b0 of the Host b(2 b) and the target Targ-a1 of the SNa(1 a) is cut off in step 9010, the session from the Host b(2 b) remains temporarily cut off until a session is established in step 9012 between the initiator Init-b0 of the Host b(2 b) and the target Targ-a1 of the SNb(1 b). However, the iSCSI command process generally has a retry mechanism, and thus if no command is received by the target, the Host b(2 b) continuously retries for a duration of about 10 seconds. If, during this duration, an SCN is issued, discovery is completed, and a new session is established between the initiator Init-b0 of the Host b(2 b) and the target Targ-a1 of the SNb(1 b), the application executed by the Host b(2 b) does not acknowledge such a momentary cut-off. Thus, without interrupting the application of the Host 2, data migration can be performed from any specific SN 1 to another SN 1. In such a manner, without interrupting the application of the Host 2, the SN 1 can be additionally provided, and the load can be distributed among a plurality of SNs 1 connected to the switch 3.
Furthermore, only the programs controlling layers lower than the operating system of the Host b(2 b), such as the TCP/IP program and the iSCSI control program, acknowledge that the location of the target Targ-a1 has changed due to data migration as above. This is because it is the TCP/IP program and the iSCSI control program that establish the TCP connection and the iSCSI session. The operating system of the Host b(2 b) does not have to acknowledge the location of the target as long as the LU is acknowledged as a logical volume. In view thereof, the operating system of the Host b(2 b) and the application program operating thereon do not acknowledge that data migration has been executed. That is, data migration can be favorably performed among the SNs 1 without the operating system of the Host 2 or the application program noticing it.
21. Method for Target Generation
Next, the method for target generation is described in more detail. The target name has to be a unique identifier. An exemplary method for retaining such uniqueness of the target name is described below.
Assuming here is that a target name is a character string of an appropriate length. An exemplary character string is a combination of various codes and numbers, e.g., a code identifying a manufacturing company, a code identifying a specific organization in the manufacturing company, a code for identifying a storage system, a code for identifying the type of a storage node, a code of a revision of the storage node, a serial number of the storage node, and a sequential number assigned to a target in the storage node. With such a structure, even if any new target is generated in a certain storage node, the newly-generated target can be provided with a target name unique thereto only by incrementing the sequential number.
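Such a target-name scheme can be sketched as below; the delimiter, field order, and zero-padding width are assumptions, the essential property being that two names from the same storage node differ only in, and are made unique by, the sequential number.

```python
def make_target_name(company, org, system, node_type, revision, serial, seq):
    """Compose a target name from fixed identifying codes plus a
    per-node sequential number. All field values here are illustrative;
    only the structure (fixed codes + incrementing suffix) matters."""
    return f"{company}.{org}.{system}.{node_type}.{revision}.{serial}.t{seq:06d}"
```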
In the present embodiment above, when data in the LU 12Xx is migrated from a specific SN 1 to another, the LU 12Xx being the migration destination takes over the target name of the LU 12Xx being the migration source. As such, even if the target name is passed between the SNs, the target name remains unique. Thus, the target name can be continuously used by the SN 1 being the migration destination after being taken over.
Herein, it is preferable to use nonvolatile memory such as Flash memory in the CTL 10 of the storage node 1 to store the maximum value of the sequential number used when providing a target name to a target in the SN 1, i.e., the maximum sequential number already in use. With such a structure, even if a power failure or error occurs in the SN 1, the Flash memory retains the sequential number. Thus, after recovery, the SN 1 can keep generating unique numbers for any new targets set in the SN 1 simply by incrementing the stored sequential number.
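The persistence of the maximum sequential number can be sketched as below, with an ordinary file standing in for the Flash memory of the CTL 10 (the file format and class name are assumptions): after a simulated power failure, a new instance resumes from the stored maximum instead of reusing numbers.

```python
import json
import os

class SeqCounter:
    """Persists the highest sequential number already used, so that a
    node recovering from power failure continues from the stored value
    and never reissues a number. A JSON file stands in for Flash."""

    def __init__(self, path):
        self.path = path
        self.max_used = 0
        if os.path.exists(path):
            with open(path) as f:
                self.max_used = json.load(f)["max_used"]

    def next(self):
        self.max_used += 1
        with open(self.path, "w") as f:   # write-through, like Flash
            json.dump({"max_used": self.max_used}, f)
        return self.max_used
```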
Note here that, shown in the above embodiment is the example of taking over a target name provided to any specific LU 12Xx in response to data migration from the LU 12Xx to another. Alternatively, at the time of data migration, the LU 12Xx being the migration destination may be provided with any new target name. If this is the case, to the LU 12Xx being the migration destination, a target name unique to the destination SN 1 can be set using a sequential number of the SN 1, the serial number of the SN 1, a revision code of the SN 1, and others. If any new target name is set to the LU 12Xx being the destination, the LU control program 1128 of the SNb(1 b) enters in step 9008 of FIG. 9 thus newly-set target name to the LU management table 1111. Also in step 9011, the SNb(1 b) is required to enter the newly-set target name to the name server 5. As a result, at the time of discovery of step 9012, the initiator Init-b0 of the Host b(2 b) detects the new target, enabling the initiator to construct a session with the target.
22. Setting of Target
In the above embodiment, shown is the example in which the SN 1 generates a target or initiator for registration into the name server 5. Instead of the SNs 1 generating the target and initiator as such, the name server 5 may generate them. If this is the case, the SNs 1 issue an instruction for the name server 5 to enter the target and initiator, and in return, the name server 5 forwards the target and initiator back to the corresponding SN 1. Then, the SN 1 makes an entry of the target and initiator received from the name server 5.
23. Display Screen of Management Console (FIG. 15)
FIG. 15 shows an exemplary display screen of the management console 4.
The structure management program 4122 of the management console 4 displays on its screen the LU management table 1111′, the target management table 1112, and the initiator management table 2112 or 1113, all of which are stored in the memory of the management console 4. FIGS. 15C and 15D both show such a display screen. Specifically, FIG. 15C shows an exemplary display screen before data migration, and FIG. 15D shows an exemplary display screen after data migration.
The structure management program 4122 displays on its screen the LU management table 1111′, the target management table 1112, the initiator management table 2112 or 1113, and pointers therefor. Thus, a manager using the management console 4 can easily grasp the relationship between the LU and the initiator or the target from the information displayed on the display screen.
The structure management program 4122 also displays the system structure on its screen based on the LU management table 1111′, the target management table 1112, and the initiator management table 2112 or 1113 stored in the memory of the management console 4. FIGS. 15A and 15B both show such a display screen. Specifically, FIG. 15A shows the system structure before data migration, and FIG. 15B shows the system structure after data migration.
FIGS. 15A and 15B both show display screens for the case where the migration destination LU-b(120 b) takes over the target name set for the migration source LU1 a(121 a). Once data migration is performed, the target name is taken over from the source LU to the destination LU, so the target Targ-a1 changes location on the display screen between before and after the migration. However, the combinations of initiator and target remain the same: the pair init-a0 and Targ-a0, and the pair init-b0 and Targ-a1. Thus, even when data migration is performed between the SNs 1, the initiator-target combinations do not change, which eases the management of initiators and targets for the manager in the system using the management console 4.
Note that the information on the display screen is updated every time the LU management table 1111′, the target management table 1112, or the initiator management table 2112 or 1113 is updated. Such an update occurs in response to an instruction issued by the structure management program to the SNs 1 as described with reference to FIG. 9, or to a notification the structure management program receives from the SNs 1 about a change applied to the system structure.
Second Embodiment
Described next is a second embodiment. The first embodiment exemplified migrating data stored in the LU1 a(121 a) of the SNa(1 a) to the newly added SNb(1 b). In the second embodiment, as shown in FIG. 10, an SNc(1 c) is further added to the switch 3, and the LU0 a(120 a) remaining in the SNa(1 a) is migrated to the newly-added SNc(1 c).
The LU0 a(120 a) with the target Targ-a0 in the SNa(1 a) is connected to the initiator Init-a0 of the Host a(2 a). Thus, in the second embodiment the initiator-target relationship differs from that in the first embodiment, and the discovery and other processes are executed by the Host a(2 a). The procedure, however, remains the same: the data in the LU0 a(120 a) of the SNa(1 a) is migrated to the LU0 c(120 c) of the SNc(1 c), the destination LU0 c(120 c) takes over the target Targ-a0 of the LU0 a(120 a), and the access path between the initiator Init-a0 and the target Targ-a0 is changed.
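The takeover sequence just described, copy the LU data to the new node, move the target name to the destination LU, and re-register the target with the name server so the host's next discovery resolves it to the new node, can be sketched compactly. The data structures are illustrative stand-ins, not the patent's actual tables.

```python
# Sketch of the migration/takeover sequence. Tables are modeled as dicts;
# names are illustrative.

class NameServer:
    def __init__(self):
        self.targets = {}          # target name -> storage node id

    def register(self, target, sn_id):
        self.targets[target] = sn_id

    def discover(self, target):
        # What a host's discovery would resolve the target name to.
        return self.targets[target]

def migrate_lu(src_lus, dst_lus, target, ns, dst_id):
    # 1. Copy the data blocks of the source LU to the destination LU.
    dst_lus[target] = list(src_lus[target])
    # 2. The destination LU takes over the target name; the source
    #    node deletes its entry.
    del src_lus[target]
    # 3. Re-register the target so discovery now points at the new node,
    #    changing the access path between initiator and target.
    ns.register(target, dst_id)
```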
After completion of such data migration, the SNa(1 a) has no LU 12Xx to be accessed by the Hosts 2. Accordingly, the SNa(1 a) can be removed from the switch 3, reducing the number of SNs.
Utilizing this process, the SNa(1 a) can be replaced with the SNc(1 c) without interrupting access from the Hosts 2. More specifically, while the access path from the Hosts 2 is being changed by migrating the data stored in the LU0 a(120 a) of the SNa(1 a) to the newly-added SNc(1 c), the Hosts 2 can still access the data stored in both LUs. Thus, even if data must be stored for longer than an SN lasts, i.e., if the data outlives the SN, because of legal requirements, for example, the data remains available by exchanging any out-of-life storage node 1 instead of replacing the storage system 1000 in its entirety.
According to the present embodiment, data can be stored over a long period, as long as the data lasts, while suppressing the cost of system replacement, without temporarily saving the data elsewhere, and without interrupting data access.
Third Embodiment
FIG. 11 is a diagram showing another exemplary system structure. The third embodiment differs from the first and second embodiments in that each storage node 1 has two controllers, CTL0 and CTL1, and the LUs 120 x are structured to be accessible by both controllers 10. Moreover, the network 30 is provided with two switches, 0(3) and 1(31), and the Hosts 2 and the storage nodes 1 are each connected to both switches. In the present embodiment, the wiring between the LUs 120 x and the CTLs 10, between the SNs 1 and the switches, and between the Hosts 2 and the switches is all duplicated. The resulting storage system is therefore highly reliable. The method for replacing a storage node 1 and for load distribution through LU migration is the same as in the first and second embodiments.
Fourth Embodiment
FIG. 12 is a diagram showing another exemplary system structure. In the present embodiment, the storage system 1000 is provided with a plurality of CTLs 10, and these CTLs 10 share the LUs 12Xx via a disk connector 150. The addition and removal of SNs in the first and second embodiments correspond here to the addition and removal of CTLs 10. As an example, a CTLc(10 c) may be added as a replacement for an out-of-life CTLa(10 a); after the newly-added CTLc(10 c) takes over the LUs 12Xx that were under the control of the CTLa(10 a), the CTLa(10 a) is removed. The procedure for taking over the LU management information in the LU management table 1111 of the CTLa(10 a), for taking over the targets in the target management table 1112 of the CTLa(10 a), and for changing the access path is executed in the same manner as in the first and second embodiments. Here, each CTL 10 is connected to the LUs 12Xx via the disk connector 150, so no data migration between LUs 12Xx is needed. For example, to hand over the LU0(120 a) that was under the control of the CTLa(10 a) to the CTLc(10 c), the CTLc(10 c) is simply allowed to access the LU0(120 a) through the disk connector 150. Exclusive control must be exercised here, and the same procedure as in the first and second embodiments is executed so that the CTLc(10 c) takes over, from the CTLa(10 a), the LU management information about the LU0(120 a) and the target information set for the LU0(120 a), i.e., the contents of the target management table 1112 concerning that target. The procedure can skip the data copying process entirely, so the system change is more cost-efficient and can be carried out more swiftly.
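The controller-replacement case above, where only the LU management and target information move under exclusive control while the data stays in place behind the shared disk connector, can be sketched as follows. A lock stands in for the exclusive control the text requires; the classes and fields are illustrative assumptions.

```python
# Sketch: controller handover over a shared disk connector. No data is
# copied; only management/target information moves, under exclusive
# control. Structures are illustrative.
import threading

class Controller:
    def __init__(self, ctl_id):
        self.ctl_id = ctl_id
        self.lu_table = {}       # stands in for LU management table 1111
        self.target_table = {}   # stands in for target management table 1112

class DiskConnector:
    """Shared path to the LUs; serializes ownership changes."""
    def __init__(self):
        self.lock = threading.Lock()
        self.owner = {}          # lu_id -> controller id

    def hand_over(self, lu_id, old_ctl, new_ctl):
        with self.lock:          # exclusive control during the takeover
            assert self.owner[lu_id] == old_ctl.ctl_id
            # Move management and target info; the data itself stays put
            # on the shared disks behind the connector.
            new_ctl.lu_table[lu_id] = old_ctl.lu_table.pop(lu_id)
            new_ctl.target_table[lu_id] = old_ctl.target_table.pop(lu_id)
            self.owner[lu_id] = new_ctl.ctl_id
```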
Fifth Embodiment
FIG. 13 is a diagram showing still another exemplary system structure. In the present embodiment, the switch 3 and the management console 4 are included in the storage system 1000. The switch 3, the management console 4, and the SNs 1 are all components of the storage system 1000 and are provided to the user as a set. In a preferred embodiment, these components are structured as a single unit, providing the user with better manageability.
Sixth Embodiment
FIG. 14 is a diagram showing still another exemplary system structure. In the present embodiment, the management console 4 of FIG. 13 is omitted, and the structure management program 4122, located in the management console 4 in the above embodiments, is provided in the CTL(10) of each storage node. Whenever a structure change occurs, each structure management program 4122 communicates with the other structure management programs 4122 to learn what change has occurred. Further, prior to a structure change, exclusive control is applied to any needed resources. Such a structure eliminates the management console 4, yielding a storage system with better cost efficiency.
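The console-less arrangement just described, where each node's structure management program takes exclusive control of affected resources and then notifies its peers of the change, can be sketched as below. The classes and message shape are illustrative assumptions.

```python
# Sketch: peer structure management programs, one per storage node.
# A change first takes exclusive control of the resource, then is
# propagated to every peer so all copies stay consistent.

class StructureManager:
    def __init__(self, node_id):
        self.node_id = node_id
        self.peers = []
        self.structure = {}        # resource -> owner node
        self.locked = set()        # resources under exclusive control

    def lock(self, resource):
        if resource in self.locked:
            return False           # another change is in progress
        self.locked.add(resource)
        return True

    def apply_change(self, resource, owner):
        # Exclusive control is taken before the structure change.
        if not self.lock(resource):
            return False
        self.structure[resource] = owner
        # Tell every peer what changed.
        for peer in self.peers:
            peer.on_peer_change(resource, owner)
        self.locked.discard(resource)
        return True

    def on_peer_change(self, resource, owner):
        # A peer reported a structure change; update the local view.
        self.structure[resource] = owner
```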
In the above embodiments, the access path from the host is changed after the LU data migration is performed. Alternatively, the change may be done in the following order:
1. Migrate the LU information (including the target information and the initiator access permission information)
2. Switch the access path from the host to the migration destination (including migration of the target name and the registration change at the name server)
3. Migrate the LU data
In this case, data access during migration can be handled in the same manner as in the background art. The same effects as in the other embodiments are still achieved; in particular, LU migration can be performed without the operating system or the applications on the hosts noticing, which is a characteristic of the present invention.
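The reordered steps listed above, LU information first, access path second, bulk data last, can be sketched as a single sequence. The structures and function name are illustrative assumptions; the log simply records the order in which the steps run.

```python
# Sketch of the alternative ordering: (1) migrate LU information,
# (2) switch the access path via the name server, (3) migrate the data.
# Dicts stand in for the LUs and the name server registry.

def migrate_lu_info_first(src, dst, ns, target, dst_id, log):
    # Step 1: migrate LU information (target and access-permission info).
    dst["info"] = dict(src["info"])
    log.append("info")
    # Step 2: switch the host's access path to the destination
    # (target-name takeover plus name-server registration change).
    ns[target] = dst_id
    log.append("path")
    # Step 3: migrate the LU data last; while this runs, accesses are
    # handled as in conventional on-line migration.
    dst["data"] = dict(src["data"])
    log.append("data")
```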

Claims (21)

1. A data migration method in a system which includes a first storage node including a first logical unit assigned to a first target of an iSCSI protocol, a second storage node coupled to the first storage node, a computer coupled to the first storage node and the second storage node, and a name server coupled to the first storage node, the second storage node and the computer, the data migration method transferring data stored in the first logical unit to the second storage node, comprising:
accessing from the computer to the first logical unit in the first storage node based on a first relation between the first storage node and the first target by using the iSCSI protocol;
indicating the second storage node to configure a second logical unit;
configuring the second logical unit in the second storage node;
sending the data in the first logical unit, by the first storage node, to the second logical unit in the second storage node;
sending, by the first storage node, a first information of the first target to the second storage node;
assigning the first target to the second logical unit at the second storage node;
notifying, by the second storage node, the name server of a second relation between the second storage node and the first target after assignment of the first target to the second logical unit by using the iSCSI protocol;
notifying, by the name server, the computer of the second relation after receiving notification of the second relation from the second storage node by using the iSCSI protocol;
accessing from the computer to the second logical unit in the second storage node instead of the first logical unit in the first storage node based on the second relation between the first target and the second storage node after receiving notification of the second relation from the name server by the iSCSI protocol.
2. A data migration method according to claim 1, wherein the step of indicating the second storage node to configure a second logical unit is performed by a management computer.
3. A data migration method according to claim 1, wherein the step of sending the data in the first logical unit to the second logical unit in the second storage node is based upon a request from a management computer.
4. A data migration method according to claim 1 further comprising:
deleting, at the first storage node, the first relation between the first storage node and the first target after assigning the first target to the second logical unit.
5. A data migration method according to claim 1,
wherein the first information of the first target is a first target name related to the first target, and
wherein, in the assigning step, the second storage node assigns the first target name of the first target to the second logical unit.
6. A data migration method according to claim 1,
wherein the first information includes an identifier of the computer which is permitted to access the first target, and
wherein, in the assigning step, the second storage node configures the second logical unit to be permitted to access by the computer by using the first information.
7. A data migration method according to claim 1 further comprising:
before starting the sending the data step, configuring the second logical unit in the second storage node to be permitted to access by the first storage node; and
informing, by the second storage node, the first storage node of permission to access the second logical unit by using the name server.
8. A data migration method according to claim 7 further comprising:
in the informing step,
notifying, by the second storage node, the name server of information of a second target which is assigned to the second logical unit; and
notifying, by the name server, the first storage node of information of the second target, and
wherein the first storage node uses the second target to send the data to the second storage node during the sending the data step.
9. A data migration method according to claim 8 further comprising:
after completion of the sending the data step,
deleting, by the second storage node, an assignment of the second target to the second logical unit; and
notifying, by the second storage node, the name server of deletion of the assignment.
10. A data migration method according to claim 1 further comprising:
receiving, at the first storage node, a second data which is used to update a part of the data stored in the first logical unit from the computer during the sending the data step;
sending, by the first storage node, the second data to the second storage node;
notifying, by the second storage node, the first storage node of completion of receiving the second data at the second storage node; and
notifying, by the first storage node, the computer of completion of updating the part of the data after receiving a notification of the completion of receiving the second data from the second storage node.
11. A data migration method according to claim 1 further comprising:
receiving a second data, which is used to update a part of the data stored in the first logical unit, from the computer at the first storage node during the sending step;
storing the second data to a location in the first logical unit where the part of the data is stored in;
recording information of the location where the second data is stored in;
sending completion notification of storing the second data from the first storage node to the computer; and
sending the second data from the first storage node to the second storage node based on the information of the location.
12. A system comprising:
a first storage node including a first controller and a plurality of first disk devices coupled to the first controller;
a second storage node, coupled to the first storage node, including a second controller and a plurality of second disk drives coupled to the second controller;
a computer, coupled to the first storage node and the second storage node, accessing a first logical unit, configured by the first controller of the first storage node on the plurality of first disk devices, by using a first target of iSCSI protocol assigned to the first logical unit; and
a name server coupled to the first storage node, the second storage node and the computer;
wherein the first controller of the first storage node sends data stored in the first logical unit to the second storage node,
the second controller of the second storage node stores the data to a second logical unit which is configured on the plurality of second disk drives,
the first controller sends a first information of the first target to the second storage node,
the second controller assigns the first target to the second logical unit in response to receiving the first information from the first controller,
the second controller notifies the name server of a second information including a relation between the first target and the second storage node after assigning the first target to the second logical unit,
the name server notifies the computer of the second information after receiving the second information from the second storage node, and
the computer accesses the second logical unit in the second storage node instead of the first logical unit in the first storage node by using the first target based on the second information after receiving the second information from the name server.
13. A system according to claim 12, wherein the first controller of the first storage node sends data stored in the first logical unit to the second storage node based upon an instruction from a management computer.
14. A system according to claim 12,
wherein the first controller of the first storage node deletes a relationship between the first target and the first logical unit in response to an assignment of the first target to the second logical unit by the second controller of the second storage node.
15. A system according to claim 12,
wherein the first information includes a first target name of the first target, and
the second controller of the second storage node assigns the first target name to the second logical unit when the first target is assigned to the second logical unit.
16. A system according to claim 12,
wherein the first information includes an identifier of the computer which is permitted to access the first target, and
the second controller of the second storage node configures the second logical unit to be permitted to access by the computer by using the first information.
17. A system according to claim 12,
wherein, before the first controller of the first storage node sending the data, the second controller of the second storage node configures the second logical unit to be permitted to access by the first storage node and informs the first storage node of permission to access the second logical unit by using the name server, and
the first controller sends the data to the second storage node based on information of the permission to access the second logical unit.
18. A system according to claim 17,
wherein, when the second controller of the second storage node informs the first storage node of the permission to access the second logical unit, the second controller notifies the name server of information of a second target which is assigned to the second logical unit,
the name server notifies the first storage node of the information of the second target, and
the first controller of the first storage node sends the data to the second storage node by using the second target.
19. A system according to claim 18,
wherein the second controller deletes the second target from the second logical unit after completion of data transfer from the first storage node to the second storage node and notifies the name server of deletion of the second target.
20. A system according to claim 12,
wherein the first controller sends a second data, which is used to update a part of the data stored in the first logical unit, to the second storage node if the first controller receives the second data from the computer while the first controller is sending the data to the second storage node,
the second controller sends a notification of completion of receiving the second data to the first controller in response to receiving the second data, and
the first controller notifies the computer of completion of updating the part of the data after receiving the notification of completion of receiving the second data from the second storage node.
21. A system according to claim 12,
wherein the first controller records information of location of a part of the data to be updated by a second data when the first controller receives the second data from the computer while the first controller is sending the data to the second storage node, notifies the computer of completion of storing the second data into the first logical unit, and sends the second data from the first logical unit to the second storage node based on the information of location.
US10/879,424 2004-05-10 2004-06-28 Data migration in storage system Expired - Fee Related US7124143B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11/120,447 US7472240B2 (en) 2004-05-10 2005-05-02 Storage system with plural control device affiliations
US11/234,459 US7912814B2 (en) 2004-05-10 2005-09-23 Data migration in storage system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2004139306 2004-05-10
JP2004-139306 2004-05-10

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US11/120,447 Continuation-In-Part US7472240B2 (en) 2004-05-10 2005-05-02 Storage system with plural control device affiliations
US11/234,459 Continuation US7912814B2 (en) 2004-05-10 2005-09-23 Data migration in storage system

Publications (2)

Publication Number Publication Date
US20060004876A1 US20060004876A1 (en) 2006-01-05
US7124143B2 true US7124143B2 (en) 2006-10-17

Family

ID=34934430

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/879,424 Expired - Fee Related US7124143B2 (en) 2004-05-10 2004-06-28 Data migration in storage system
US11/234,459 Expired - Fee Related US7912814B2 (en) 2004-05-10 2005-09-23 Data migration in storage system

Family Applications After (1)

Application Number Title Priority Date Filing Date
US11/234,459 Expired - Fee Related US7912814B2 (en) 2004-05-10 2005-09-23 Data migration in storage system

Country Status (3)

Country Link
US (2) US7124143B2 (en)
EP (1) EP1596275A3 (en)
CN (2) CN100409202C (en)

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060059188A1 (en) * 2004-09-16 2006-03-16 Tetsuya Shirogane Operation environment associating data migration method
US20060075400A1 (en) * 2004-09-28 2006-04-06 Marc Jarvis System and method for data migration integration with information handling system manufacture
US20070201470A1 (en) * 2006-02-27 2007-08-30 Robert Martinez Fast database migration
US20070208836A1 (en) * 2005-12-27 2007-09-06 Emc Corporation Presentation of virtual arrays using n-port ID virtualization
US20070263637A1 (en) * 2005-12-27 2007-11-15 Emc Corporation On-line data migration of a logical/virtual storage array
US20080059744A1 (en) * 2006-08-29 2008-03-06 Hitachi, Ltd. Storage system, and data management and migration method
US20090049441A1 (en) * 2006-02-10 2009-02-19 Mitsubishi Electric Corporation Remote Update System for Elevator Control Program
US7685395B1 (en) 2005-12-27 2010-03-23 Emc Corporation Spanning virtual arrays across multiple physical storage arrays
US7697554B1 (en) * 2005-12-27 2010-04-13 Emc Corporation On-line data migration of a logical/virtual storage array by replacing virtual names
US20100162044A1 (en) * 2004-08-09 2010-06-24 Siew Yong Sim-Tang Method for erasure coding data across a plurality of data stores in a network
US20100162076A1 (en) * 2004-08-09 2010-06-24 Siew Yong Sim-Tang Method for lock-free clustered erasure coding and recovery of data across a plurality of data stores in a network
US7757059B1 (en) 2006-06-29 2010-07-13 Emc Corporation Virtual array non-disruptive management data migration
US20100287345A1 (en) * 2009-05-05 2010-11-11 Dell Products L.P. System and Method for Migration of Data
US8072987B1 (en) 2005-09-30 2011-12-06 Emc Corporation Full array non-disruptive data migration
US8107467B1 (en) * 2005-09-30 2012-01-31 Emc Corporation Full array non-disruptive failover
US8452928B1 (en) 2006-06-29 2013-05-28 Emc Corporation Virtual array non-disruptive migration of extended storage functionality
US8533408B1 (en) 2006-06-29 2013-09-10 Emc Corporation Consolidating N-storage arrays into one storage array using virtual array non-disruptive data migration
US8539177B1 (en) 2006-06-29 2013-09-17 Emc Corporation Partitioning of a storage array into N-storage arrays using virtual array non-disruptive data migration
US8583861B1 (en) 2006-06-29 2013-11-12 Emc Corporation Presentation of management functionality of virtual arrays
US8589504B1 (en) 2006-06-29 2013-11-19 Emc Corporation Full array non-disruptive management data migration
US9063895B1 (en) 2007-06-29 2015-06-23 Emc Corporation System and method of non-disruptive data migration between heterogeneous storage arrays
US9063896B1 (en) 2007-06-29 2015-06-23 Emc Corporation System and method of non-disruptive data migration between virtual arrays of heterogeneous storage arrays
US9098211B1 (en) 2007-06-29 2015-08-04 Emc Corporation System and method of non-disruptive data migration between a full storage array and one or more virtual arrays
US11115735B2 (en) * 2017-05-30 2021-09-07 Commscope Technologies Llc Reconfigurable optical networks

Families Citing this family (19)

Publication number Priority date Publication date Assignee Title
US8374966B1 (en) 2002-08-01 2013-02-12 Oracle International Corporation In memory streaming with disk backup and recovery of messages captured from a database redo stream
JP4446839B2 (en) * 2004-08-30 2010-04-07 株式会社日立製作所 Storage device and storage management device
US7761678B1 (en) 2004-09-29 2010-07-20 Verisign, Inc. Method and apparatus for an improved file repository
JP2006107158A (en) * 2004-10-06 2006-04-20 Hitachi Ltd Storage network system and access control method
JP4814617B2 (en) * 2005-11-01 2011-11-16 株式会社日立製作所 Storage system
JP2007140699A (en) * 2005-11-15 2007-06-07 Hitachi Ltd Computer system and storage device and management server and communication control method
US20070274231A1 (en) * 2006-05-24 2007-11-29 Dell Products L.P. System and method for improving the performance and stability of Serial Attached SCSI networks
US20080071502A1 (en) * 2006-09-15 2008-03-20 International Business Machines Corporation Method and system of recording time of day clock
US7725894B2 (en) * 2006-09-15 2010-05-25 International Business Machines Corporation Enhanced un-privileged computer instruction to store a facility list
JP5327497B2 (en) * 2007-07-11 2013-10-30 日立オートモティブシステムズ株式会社 Map data distribution system and map data update method
US8799213B2 (en) 2007-07-31 2014-08-05 Oracle International Corporation Combining capture and apply in a distributed information sharing system
US7801852B2 (en) * 2007-07-31 2010-09-21 Oracle International Corporation Checkpoint-free in log mining for distributed information sharing
US9230002B2 (en) 2009-01-30 2016-01-05 Oracle International Corporation High performant information sharing and replication for single-publisher and multiple-subscriber configuration
US8707105B2 (en) * 2010-11-01 2014-04-22 Cleversafe, Inc. Updating a set of memory devices in a dispersed storage network
US20150089382A1 (en) * 2013-09-26 2015-03-26 Wu-chi Feng Application context migration framework and protocol
CN107844275A (en) * 2017-11-22 2018-03-27 郑州云海信息技术有限公司 A kind of moving method of data, device and medium
US10481823B2 (en) * 2018-02-21 2019-11-19 International Business Machines Corporation Data storage system performing data relocation based on temporal proximity of accesses
US10922268B2 (en) 2018-08-30 2021-02-16 International Business Machines Corporation Migrating data from a small extent pool to a large extent pool
US11016691B2 (en) 2019-01-25 2021-05-25 International Business Machines Corporation Migrating data from a large extent pool to a small extent pool

Citations (40)

Publication number Priority date Publication date Assignee Title
US5708812A (en) 1996-01-18 1998-01-13 Microsoft Corporation Method and apparatus for Migrating from a source domain network controller to a target domain network controller
US5734859A (en) 1993-10-14 1998-03-31 Fujitsu Limited Disk cache apparatus having selectable performance modes
US5734922A (en) * 1996-07-01 1998-03-31 Sun Microsystems, Inc. Multiprocessing system configured to detect and efficiently provide for migratory data access patterns
US5832274A (en) 1996-10-09 1998-11-03 Novell, Inc. Method and system for migrating files from a first environment to a second environment
US5918249A (en) * 1996-12-19 1999-06-29 Ncr Corporation Promoting local memory accessing and data migration in non-uniform memory access system architectures
JP2000187608A (en) 1998-12-24 2000-07-04 Hitachi Ltd Storage device sub-system
US6108748A (en) 1995-09-01 2000-08-22 Emc Corporation System and method for on-line, real time, data migration
US6115463A (en) * 1997-11-21 2000-09-05 Telefonaktiebolaget Lm Ericsson (Publ) Migration of subscriber data between home location registers of a telecommunications system
US6230239B1 (en) * 1996-12-11 2001-05-08 Hitachi, Ltd. Method of data migration
US6240494B1 (en) 1997-12-24 2001-05-29 Hitachi, Ltd. Subsystem replacement method
US20010047460A1 (en) 2000-04-25 2001-11-29 Naotaka Kobayashi Remote copy system of storage systems connected to fibre network
US6336172B1 (en) 1999-04-01 2002-01-01 International Business Machines Corporation Storing and tracking multiple copies of data in a data storage library system
US20020019922A1 (en) * 2000-06-02 2002-02-14 Reuter James M. Data migration using parallel, distributed table driven I/O mapping
US6421711B1 (en) 1998-06-29 2002-07-16 Emc Corporation Virtual ports for data transferring of a data storage system
US20020112008A1 (en) 2000-02-22 2002-08-15 Christenson Nikolai Paul Electronic mail system with methodology providing distributed message store
US20030028555A1 (en) * 2001-07-31 2003-02-06 Young William J. Database migration
JP2003108315A (en) 2002-09-02 2003-04-11 Hitachi Ltd Storage subsystem
US20030074523A1 (en) * 2001-10-11 2003-04-17 International Business Machines Corporation System and method for migrating data
US20030093442A1 (en) * 2001-11-12 2003-05-15 Kazuhiko Mogi Storage apparatus acquiring static information related to database management system
US20030093439A1 (en) * 2001-11-12 2003-05-15 Kazuhiko Mogi Method and apparatus for relocating data related to database management system
US20030110237A1 (en) 2001-12-06 2003-06-12 Hitachi, Ltd. Methods of migrating data between storage apparatuses
US20030115447A1 (en) 2001-12-18 2003-06-19 Duc Pham Network media access architecture and methods for secure storage
US20030135511A1 (en) 2002-01-11 2003-07-17 International Business Machines Corporation Method, apparatus, and program for separate representations of file system locations from referring file systems
US20030140193A1 (en) 2002-01-18 2003-07-24 International Business Machines Corporation Virtualization of iSCSI storage
US20030182330A1 (en) 2002-03-19 2003-09-25 Manley Stephen L. Format for transmission file system information between a source and a destination
US6654830B1 (en) 1999-03-25 2003-11-25 Dell Products L.P. Method and system for managing data migration for a storage system
US20040049553A1 (en) 2002-09-05 2004-03-11 Takashige Iwamura Information processing system having data migration device
US6715031B2 (en) 2001-12-28 2004-03-30 Hewlett-Packard Development Company, L.P. System and method for partitioning a storage area network associated data library
US20040068629A1 (en) 2001-08-10 2004-04-08 Hitachi, Ltd. Apparatus and method for online data migration with remote copy
US20040088483A1 (en) 2002-11-04 2004-05-06 Paresh Chatterjee Online RAID migration without non-volatile memory
US20040117546A1 (en) 2002-12-11 2004-06-17 Makio Mizuno iSCSI storage management method and management system
US20040139237A1 (en) 2002-06-28 2004-07-15 Venkat Rangan Apparatus and method for data migration in a storage processing device
US20040143642A1 (en) 2002-06-28 2004-07-22 Beckmann Curt E. Apparatus and method for fibre channel data processing in a storage process device
US6772306B2 (en) 1998-03-24 2004-08-03 Hitachi, Ltd. Data saving method and external storage device
US20040172512A1 (en) 2003-02-28 2004-09-02 Masashi Nakanishi Method, apparatus, and computer readable medium for managing back-up
US20040225719A1 (en) 2003-05-07 2004-11-11 International Business Machines Corporation Distributed file serving architecture system with metadata storage virtualization and data access at the data server connection speed
US20050005062A1 (en) 2003-07-02 2005-01-06 Ling-Yi Liu Redundant external storage virtualization computer system
US20050010688A1 (en) 2003-06-17 2005-01-13 Hitachi, Ltd. Management device for name of virtual port
US20050033878A1 (en) 2002-06-28 2005-02-10 Gururaj Pangal Apparatus and method for data virtualization in a storage processing device
US6950833B2 (en) 2001-06-05 2005-09-27 Silicon Graphics, Inc. Clustered filesystem

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP4164986B2 (en) * 2000-04-21 2008-10-15 沖電気工業株式会社 Data transfer method, node device, and communication network system
US6976134B1 (en) * 2001-09-28 2005-12-13 Emc Corporation Pooling and provisioning storage resources in a storage network
JP4311636B2 (en) * 2003-10-23 2009-08-12 株式会社日立製作所 A computer system that shares a storage device among multiple computers

Patent Citations (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5734859A (en) 1993-10-14 1998-03-31 Fujitsu Limited Disk cache apparatus having selectable performance modes
US6108748A (en) 1995-09-01 2000-08-22 Emc Corporation System and method for on-line, real time, data migration
US6356977B2 (en) * 1995-09-01 2002-03-12 Emc Corporation System and method for on-line, real time, data migration
US5708812A (en) 1996-01-18 1998-01-13 Microsoft Corporation Method and apparatus for Migrating from a source domain network controller to a target domain network controller
US5734922A (en) * 1996-07-01 1998-03-31 Sun Microsystems, Inc. Multiprocessing system configured to detect and efficiently provide for migratory data access patterns
US5832274A (en) 1996-10-09 1998-11-03 Novell, Inc. Method and system for migrating files from a first environment to a second environment
US6230239B1 (en) * 1996-12-11 2001-05-08 Hitachi, Ltd. Method of data migration
US5918249A (en) * 1996-12-19 1999-06-29 Ncr Corporation Promoting local memory accessing and data migration in non-uniform memory access system architectures
US6115463A (en) * 1997-11-21 2000-09-05 Telefonaktiebolaget Lm Ericsson (Publ) Migration of subscriber data between home location registers of a telecommunications system
US6240494B1 (en) 1997-12-24 2001-05-29 Hitachi, Ltd. Subsystem replacement method
US6772306B2 (en) 1998-03-24 2004-08-03 Hitachi, Ltd. Data saving method and external storage device
US6421711B1 (en) 1998-06-29 2002-07-16 Emc Corporation Virtual ports for data transferring of a data storage system
JP2000187608A (en) 1998-12-24 2000-07-04 Hitachi Ltd Storage device sub-system
US6654830B1 (en) 1999-03-25 2003-11-25 Dell Products L.P. Method and system for managing data migration for a storage system
US6336172B1 (en) 1999-04-01 2002-01-01 International Business Machines Corporation Storing and tracking multiple copies of data in a data storage library system
US20020112008A1 (en) 2000-02-22 2002-08-15 Christenson Nikolai Paul Electronic mail system with methodology providing distributed message store
US20010047460A1 (en) 2000-04-25 2001-11-29 Naotaka Kobayashi Remote copy system of storage systems connected to fibre network
US20020019922A1 (en) * 2000-06-02 2002-02-14 Reuter James M. Data migration using parallel, distributed table driven I/O mapping
US6950833B2 (en) 2001-06-05 2005-09-27 Silicon Graphics, Inc. Clustered filesystem
US20030028555A1 (en) * 2001-07-31 2003-02-06 Young William J. Database migration
US20040068629A1 (en) 2001-08-10 2004-04-08 Hitachi, Ltd. Apparatus and method for online data migration with remote copy
US20030074523A1 (en) * 2001-10-11 2003-04-17 International Business Machines Corporation System and method for migrating data
US20030093439A1 (en) * 2001-11-12 2003-05-15 Kazuhiko Mogi Method and apparatus for relocating data related to database management system
US20030093442A1 (en) * 2001-11-12 2003-05-15 Kazuhiko Mogi Storage apparatus acquiring static information related to database management system
US20030110237A1 (en) 2001-12-06 2003-06-12 Hitachi, Ltd. Methods of migrating data between storage apparatuses
US20030115447A1 (en) 2001-12-18 2003-06-19 Duc Pham Network media access architecture and methods for secure storage
US6715031B2 (en) 2001-12-28 2004-03-30 Hewlett-Packard Development Company, L.P. System and method for partitioning a storage area network associated data library
US20030135511A1 (en) 2002-01-11 2003-07-17 International Business Machines Corporation Method, apparatus, and program for separate representations of file system locations from referring file systems
US20050262102A1 (en) 2002-01-11 2005-11-24 Anderson Owen T Method, apparatus, and program for separate representations of file system locations from referring file systems
US6931410B2 (en) 2002-01-11 2005-08-16 International Business Machines Corporation Method, apparatus, and program for separate representations of file system locations from referring file systems
US20030140193A1 (en) 2002-01-18 2003-07-24 International Business Machines Corporation Virtualization of iSCSI storage
US20030182330A1 (en) 2002-03-19 2003-09-25 Manley Stephen L. Format for transmission file system information between a source and a destination
US20040143642A1 (en) 2002-06-28 2004-07-22 Beckmann Curt E. Apparatus and method for fibre channel data processing in a storage process device
US20040139237A1 (en) 2002-06-28 2004-07-15 Venkat Rangan Apparatus and method for data migration in a storage processing device
US20050033878A1 (en) 2002-06-28 2005-02-10 Gururaj Pangal Apparatus and method for data virtualization in a storage processing device
JP2003108315A (en) 2002-09-02 2003-04-11 Hitachi Ltd Storage subsystem
US20040049553A1 (en) 2002-09-05 2004-03-11 Takashige Iwamura Information processing system having data migration device
US20040088483A1 (en) 2002-11-04 2004-05-06 Paresh Chatterjee Online RAID migration without non-volatile memory
US20040117546A1 (en) 2002-12-11 2004-06-17 Makio Mizuno iSCSI storage management method and management system
US20040172512A1 (en) 2003-02-28 2004-09-02 Masashi Nakanishi Method, apparatus, and computer readable medium for managing back-up
US20040225719A1 (en) 2003-05-07 2004-11-11 International Business Machines Corporation Distributed file serving architecture system with metadata storage virtualization and data access at the data server connection speed
US20050010688A1 (en) 2003-06-17 2005-01-13 Hitachi, Ltd. Management device for name of virtual port
US20050005062A1 (en) 2003-07-02 2005-01-06 Ling-Yi Liu Redundant external storage virtualization computer system

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Banatre, M., "Hiding Distribution in Distributed Systems", Proceedings of the 13th International Conference on Software Engineering, 1991, pp. 189-196.
Knowles, Mike, "Survey of the Storage Evolution", Proceedings of 2003 User Group Conference, 2003, 6 pages.
Leach et al., "The Architecture of an Integrated Local Network", IEEE Journal on Selected Areas in Communications, vol. SAC-1, No. 5, Nov. 1983, pp. 842-857.
Leach et al., "The File System on an Integrated Local Network", Proceedings of the 1985 ACM Computer Science Conference, Mar. 1985, pp. 309-324.
Shrimpf, H., "Migration of Processes, Files, and Virtual Devices in the MDX Operating System", ACM SIGOPS Operating Systems Review, vol. 29, Issue 2, Apr. 1995, pp. 70-81.
Welch, Brent et al., "Prefix Tables: A Simple Mechanism for Locating Files in a Distributed System", Computer Science Division Report No. UCB/CSD 86/261, Oct. 1985, 12 pages.

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100162044A1 (en) * 2004-08-09 2010-06-24 Siew Yong Sim-Tang Method for erasure coding data across a plurality of data stores in a network
US9122627B1 (en) * 2004-08-09 2015-09-01 Dell Software Inc. Method for lock-free clustered erasure coding and recovery of data across a plurality of data stores in a network
US8205139B1 (en) * 2004-08-09 2012-06-19 Quest Software, Inc. Method for lock-free clustered erasure coding and recovery of data across a plurality of data stores in a network
US8086937B2 (en) 2004-08-09 2011-12-27 Quest Software, Inc. Method for erasure coding data across a plurality of data stores in a network
US8051361B2 (en) * 2004-08-09 2011-11-01 Quest Software, Inc. Method for lock-free clustered erasure coding and recovery of data across a plurality of data stores in a network
US20100162076A1 (en) * 2004-08-09 2010-06-24 Siew Yong Sim-Tang Method for lock-free clustered erasure coding and recovery of data across a plurality of data stores in a network
US20060059188A1 (en) * 2004-09-16 2006-03-16 Tetsuya Shirogane Operation environment associating data migration method
US20060075400A1 (en) * 2004-09-28 2006-04-06 Marc Jarvis System and method for data migration integration with information handling system manufacture
US8458692B2 (en) * 2004-09-28 2013-06-04 Dell Products L.P. System and method for data migration integration with information handling system manufacture
US8107467B1 (en) * 2005-09-30 2012-01-31 Emc Corporation Full array non-disruptive failover
US8072987B1 (en) 2005-09-30 2011-12-06 Emc Corporation Full array non-disruptive data migration
US7697515B2 (en) * 2005-12-27 2010-04-13 Emc Corporation On-line data migration of a logical/virtual storage array
US7697554B1 (en) * 2005-12-27 2010-04-13 Emc Corporation On-line data migration of a logical/virtual storage array by replacing virtual names
US20070208836A1 (en) * 2005-12-27 2007-09-06 Emc Corporation Presentation of virtual arrays using n-port ID virtualization
US9348530B2 (en) 2005-12-27 2016-05-24 Emc Corporation Presentation of virtual arrays using n-port ID virtualization
US7685395B1 (en) 2005-12-27 2010-03-23 Emc Corporation Spanning virtual arrays across multiple physical storage arrays
US20070263637A1 (en) * 2005-12-27 2007-11-15 Emc Corporation On-line data migration of a logical/virtual storage array
US20090049441A1 (en) * 2006-02-10 2009-02-19 Mitsubishi Electric Corporation Remote Update System for Elevator Control Program
US8204970B2 (en) * 2006-02-10 2012-06-19 Mitsubishi Electric Corporation Remote update system for elevator control program
US8165137B2 (en) * 2006-02-27 2012-04-24 Alcatel Lucent Fast database migration
US20070201470A1 (en) * 2006-02-27 2007-08-30 Robert Martinez Fast database migration
US8539177B1 (en) 2006-06-29 2013-09-17 Emc Corporation Partitioning of a storage array into N-storage arrays using virtual array non-disruptive data migration
US8589504B1 (en) 2006-06-29 2013-11-19 Emc Corporation Full array non-disruptive management data migration
US8452928B1 (en) 2006-06-29 2013-05-28 Emc Corporation Virtual array non-disruptive migration of extended storage functionality
US7757059B1 (en) 2006-06-29 2010-07-13 Emc Corporation Virtual array non-disruptive management data migration
US8533408B1 (en) 2006-06-29 2013-09-10 Emc Corporation Consolidating N-storage arrays into one storage array using virtual array non-disruptive data migration
US8583861B1 (en) 2006-06-29 2013-11-12 Emc Corporation Presentation of management functionality of virtual arrays
US20080059744A1 (en) * 2006-08-29 2008-03-06 Hitachi, Ltd. Storage system, and data management and migration method
US7546433B2 (en) 2006-08-29 2009-06-09 Hitachi, Ltd. Storage system, and data management and migration method
US9063895B1 (en) 2007-06-29 2015-06-23 Emc Corporation System and method of non-disruptive data migration between heterogeneous storage arrays
US9063896B1 (en) 2007-06-29 2015-06-23 Emc Corporation System and method of non-disruptive data migration between virtual arrays of heterogeneous storage arrays
US9098211B1 (en) 2007-06-29 2015-08-04 Emc Corporation System and method of non-disruptive data migration between a full storage array and one or more virtual arrays
US8539180B2 (en) 2009-05-05 2013-09-17 Dell Products L.P. System and method for migration of data
US8122213B2 (en) 2009-05-05 2012-02-21 Dell Products L.P. System and method for migration of data
US20100287345A1 (en) * 2009-05-05 2010-11-11 Dell Products L.P. System and Method for Migration of Data
US11115735B2 (en) * 2017-05-30 2021-09-07 Commscope Technologies Llc Reconfigurable optical networks

Also Published As

Publication number Publication date
CN100409202C (en) 2008-08-06
EP1596275A3 (en) 2008-11-05
CN101290558B (en) 2011-04-06
CN1696913A (en) 2005-11-16
CN101290558A (en) 2008-10-22
EP1596275A2 (en) 2005-11-16
US20060020663A1 (en) 2006-01-26
US7912814B2 (en) 2011-03-22
US20060004876A1 (en) 2006-01-05

Similar Documents

Publication Publication Date Title
US7124143B2 (en) Data migration in storage system
US7472240B2 (en) Storage system with plural control device affiliations
US9639277B2 (en) Storage system with virtual volume having data arranged astride storage devices, and volume management method
US8103826B2 (en) Volume management for network-type storage devices
JP5057656B2 (en) Storage system and storage system operation method
JP5309043B2 (en) Storage system and method for duplicate data deletion in storage system
US7558916B2 (en) Storage system, data processing method and storage apparatus
JP4990322B2 (en) Data movement management device and information processing system
JP4718285B2 (en) Computer system having file management function, storage device, and file management method
JP5461216B2 (en) Method and apparatus for logical volume management
JP4852298B2 (en) Method for taking over information for identifying virtual volume and storage system using the method
US9122415B2 (en) Storage system using real data storage area dynamic allocation method
US20070079098A1 (en) Automatic allocation of volumes in storage area networks
WO2014141466A1 (en) Computer system
JP2015510296A (en) System, apparatus, and method for identifying stored data that can be accessed by a host entity and providing data management services
JP2010079626A (en) Load distribution method and system for computer system
JP2015532734A (en) Management system for managing physical storage system, method for determining resource migration destination of physical storage system, and storage medium
JP6005446B2 (en) Storage system, virtualization control device, information processing device, and storage system control method
JP5272185B2 (en) Computer system and storage system
JP2004355638A (en) Computer system and device assigning method therefor
US7546433B2 (en) Storage system, and data management and migration method
JP2007004710A (en) Storage access system, data transfer device, storage accessing method and program
US11693577B2 (en) Storage operation processing during data migration using migrated indicator from source storage
JP2003296154A (en) Volume integrated management method and integrated management system
JP2017058736A (en) Storage system, storage control apparatus, and access control method

Legal Events

Date Code Title Description
AS Assignment

Owner name: HITACHI, LTD., JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MATSUNAMI, NAOTO;SHIROGANE, TETSUYA;IWAMI, NAOKO;AND OTHERS;REEL/FRAME:016579/0917;SIGNING DATES FROM 20040624 TO 20040713

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FEPP Fee payment procedure

Free format text: PAYER NUMBER DE-ASSIGNED (ORIGINAL EVENT CODE: RMPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.)

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20181017