US20060187908A1 - Network system and its switches - Google Patents
- Publication number
- US20060187908A1 (application US11/407,167)
- Authority
- US
- United States
- Prior art keywords
- storage device
- data
- switch
- read request
- computer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/18—File system types
- G06F16/182—Distributed file systems
- G06F16/1824—Distributed file systems implemented using Network-attached Storage [NAS] architecture
Definitions
- the present invention relates to switches placed in a network that connects a storage device with a computer.
- SAN storage area network
- examples of technologies that reduce the frequency of data transfer in a network so as to shorten time required for causing a computer to access data stored in a storage device include a network cache technology.
- a storage area for temporarily storing data (hereinafter referred to as a “cache device”) is first prepared on the network. When the computer reads out data stored in the storage device through the cache device, the read data is stored in the cache device, and the cache device itself returns a response when the data is accessed thereafter. As a result, the access time for the data is shortened.
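As an illustration of the read-through behavior just described (a sketch, not taken from the patent): the first read of a block misses and is fetched from the storage device, after which the cache device answers by itself.

```python
# Illustrative sketch (not from the patent): the read-through behavior of
# the network cache technology described above. A first read misses and
# fetches from the storage device; later reads are answered by the cache.

class ReadThroughCache:
    def __init__(self, backing_store):
        self.backing_store = backing_store  # stands in for the storage device
        self.cache = {}                     # stands in for the "cache device"
        self.backend_reads = 0

    def read(self, key):
        # Cache hit: respond without touching the storage device.
        if key in self.cache:
            return self.cache[key]
        # Cache miss: read from the storage device and keep a copy.
        self.backend_reads += 1
        data = self.backing_store[key]
        self.cache[key] = data
        return data

cache = ReadThroughCache({"block0": b"data"})
assert cache.read("block0") == b"data" and cache.backend_reads == 1
assert cache.read("block0") == b"data" and cache.backend_reads == 1  # served from cache
```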
- a computer first accesses the metadata server when accessing a storage device.
- the metadata server notifies the computer of a location of data to be accessed. If a cache is used, the computer is notified of a location of a device having the cache.
- examples of technologies for placing a device having a cache on a network such as the Internet or WWW include a technology called transparent cache.
- a switch when a switch receives an access request for data, which has been issued from a computer to a storage device, the switch transmits the access request to a computer having a cache (hereinafter referred to as a “cache server”) as a first step. If a target file of the access request exists in the cache possessed by the cache server (hereinafter referred to as a “cache hit”), the cache server transmits the target file to the computer that has issued the access request.
- the cache server transmits an access request to the storage device to obtain the data, and then transmits the obtained data to the computer that has issued the access request.
- a read request is equivalent to an access request specifying a file name, etc.
- on receiving the access request, the cache server first reads out a file held in its own storage device, together with data called metadata, which records the association of the file with the corresponding blocks in the storage device, and then searches the read data for the file specified by the read request to judge whether or not a cache hit is encountered. Because this search judges the coincidence of a name or the like, it is more complicated for a block access protocol such as SCSI than the comparison between the numerical values of the logical block addresses used to specify locations to be accessed.
- An object of the present invention is to speed up an access to data without changing settings of a computer in SAN so that a network bandwidth can be saved.
- a network system has the following configuration.
- a network system comprising: a computer; a switch that is connected to the computer; a first storage device that is connected to the switch via a network; and a second storage device that is connected to the switch via the network.
- the switch transfers data stored in the first storage device to the second storage device according to an instruction from outside. Then, on receiving from the computer an access request for the data stored in the first storage device, the switch converts the access request into an access request to the second storage device, and then transmits the converted access request to the second storage device. Next, after receiving data from the second storage device, the switch converts the received data into such data that can be recognized as data transmitted from the first storage device, and then transmits the converted data to the computer.
- a second computer connected to the switch may also give an instruction to the switch.
- the switch may also provide the computer with a virtual storage corresponding to the first storage device. In this case, the computer issues an access request to the virtual storage.
- the above-mentioned switch and the second storage device may also be integrated into one device.
- instead of transferring data stored in the first storage device to the second storage device beforehand, the switch may also transfer the data stored in the first storage device to the second storage device in response to an access request from the computer. Further, in this case, the switch may have information about whether or not the data stored in the first storage device has been transferred to the second storage device, and transmit an access request to the first storage device or the second storage device according to the information. Furthermore, in this aspect, when the switch transfers data from the first storage device to the second storage device, the switch checks the amount of free storage capacity of the second storage device.
- the switch deletes some amount of data stored in the second storage device according to a predetermined criterion, e.g., according to the frequency of use by the computer, and then transfers the data to the area freed by the deletion.
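One possible form of the "predetermined criterion" above is deleting the least-frequently-used data first. The following is a hypothetical sketch under that assumption; the entry layout and names are illustrative, not from the patent.

```python
# Hypothetical sketch: free space on the second storage device by deleting
# least-frequently-used data until the pending transfer fits.

def evict_until_free(entries, needed, capacity):
    """entries: dict name -> (size, use_count); returns names deleted."""
    used = sum(size for size, _ in entries.values())
    deleted = []
    # Delete in order of ascending use count until enough space is free.
    for name in sorted(entries, key=lambda n: entries[n][1]):
        if capacity - used >= needed:
            break
        used -= entries[name][0]
        deleted.append(name)
    for name in deleted:
        del entries[name]
    return deleted

entries = {"a": (40, 1), "b": (30, 9), "c": (20, 5)}
# 90 of 100 units used; freeing 40 rarely-used units makes room for 30.
assert evict_until_free(entries, needed=30, capacity=100) == ["a"]
assert "a" not in entries
```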
- the first storage device or the second storage device may also control the transmission of data.
- FIG. 1 is a diagram illustrating a configuration of a computer system according to a first embodiment of the present invention
- FIG. 2 is a diagram illustrating a configuration of a copy management switch according to the first embodiment
- FIG. 3 is a diagram illustrating a memory configuration of a copy management switch according to the first embodiment
- FIG. 4 is a diagram illustrating a configuration of a copy management table according to the first embodiment
- FIG. 5 is a flowchart illustrating address translation processing according to the first embodiment
- FIG. 6 is a diagram illustrating a configuration of a computer system according to a second embodiment of the present invention.
- FIG. 7 is a diagram illustrating a configuration of a proxy address table according to the second embodiment.
- FIG. 8 is a flowchart illustrating address translation processing according to the second embodiment
- FIG. 9 is a diagram illustrating a configuration of a computer system according to a third embodiment of the present invention.
- FIG. 10 is a diagram illustrating a configuration of a virtual address table according to the third embodiment.
- FIG. 11 is a diagram illustrating an example of a copy management table according to the third embodiment.
- FIG. 12 is a diagram illustrating a configuration example in which a plurality of copy management switches are provided in the third embodiment
- FIG. 13 is a diagram illustrating a configuration of a computer system according to a fourth embodiment of the present invention.
- FIG. 14 is a diagram illustrating a configuration of a computer system according to a fifth embodiment of the present invention.
- FIG. 15 is a diagram illustrating a configuration example of a copy management switch
- FIG. 16 is a diagram illustrating a configuration of a copy management table according to a sixth embodiment
- FIG. 17 is a diagram illustrating a configuration of a cache table according to the sixth embodiment.
- FIG. 18 is a flowchart illustrating address translation processing according to the sixth embodiment.
- FIG. 19 is a flowchart illustrating cache processing according to the sixth embodiment.
- FIG. 1 is a diagram illustrating a first embodiment of a computer system to which the present invention is applied.
- the computer system comprises a SAN 101 , a host 105 , a storage device 104 a , and a storage device 104 b .
- the host 105 , the storage device 104 a , and the storage device 104 b are interconnected over the SAN 101 .
- the SAN 101 comprises the host 105 , switches 102 a , 102 b , 102 c , and a copy management switch 103 described later.
- This embodiment, which will be described below, is based on the assumption that the host 105 makes a read request for data (hereinafter referred to as “original data”) stored in the storage device 104 a.
- the host 105 is a computer comprising a CPU 1051 , a memory 1052 , and an interface 1053 used to make a connection to the SAN 101 .
- the storage device 104 comprises the following: a medium 1043 for storing data; an interface 1044 used to make a connection to the SAN 101 ; a CPU 1041 for executing a program used to respond to a request from the host 105 ; and a memory 1042 .
- a variety of media are conceivable as the medium 1043 included in the storage device 104.
- a disk array constituted of a plurality of hard disk drives may also be adopted as the medium 1043 .
- the storage device 104 transmits to the host 105 data corresponding to the request, and then transmits a response notifying that the transmission is completed.
- the switches 102 a , 102 b , 102 c , and the copy management switch 103 mutually exchange connection information to create a routing table required for routing processing described below. To be more specific, they exchange information indicating a load (overhead) of communication between arbitrary two switches (hereinafter referred to as a “connection cost”). In general, the connection cost becomes larger with decrease in communication bandwidth of a communication line between the arbitrary two switches. However, an administrator or the like can also set a connection cost at a given value through a management terminal 106 described below. Each of the switches calculates, from all connection costs obtained, the sum of the connection costs for a path leading to each of the other switches, and thereby finds a path for which the sum of the connection costs is the lowest. The path is then stored in the routing table.
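The lowest-cost-path computation described above can be sketched with a standard shortest-path algorithm; the patent does not name one, so Dijkstra's algorithm here is an assumption, and the switch names are illustrative.

```python
import heapq

# Sketch (assumed details): from the exchanged connection costs, compute the
# lowest-total-cost path to every other switch and record the first hop,
# giving a routing table as described above.

def build_routing_table(costs, start):
    """costs: {switch: {neighbor: connection_cost}} -> {dest: first hop}."""
    dist = {start: 0}
    next_hop = {}
    heap = [(0, start, None)]     # (total cost so far, switch, first hop)
    while heap:
        d, sw, hop = heapq.heappop(heap)
        if d > dist.get(sw, float("inf")):
            continue              # stale entry for an already-improved path
        if hop is not None:
            next_hop.setdefault(sw, hop)
        for nb, c in costs.get(sw, {}).items():
            nd = d + c
            if nd < dist.get(nb, float("inf")):
                dist[nb] = nd
                # Remember the first hop taken out of the start switch.
                heapq.heappush(heap, (nd, nb, nb if sw == start else hop))
    return next_hop

costs = {"102a": {"103": 1, "102b": 5},
         "103":  {"102a": 1, "102b": 1},
         "102b": {"102a": 5, "103": 1}}
table = build_routing_table(costs, "102a")
assert table["102b"] == "103"   # cost 2 via switch 103 beats direct cost 5
```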
- a SAN domain address unique to each switch is assigned to each of the switches 102 a , 102 b , 102 c and the copy management switch 103 . Additionally, a unique SAN address is assigned to each of the storage devices 104 a , 104 b and the host 105 .
- the SAN address is an address constituted of: a SAN domain address of a switch in the SAN 101 , which is connected to a device (hereinafter referred to also as a “node”) such as a computer connected to the SAN 101 ; and a SAN node address unique to a group (hereinafter referred to as a “domain”) specified by the SAN domain address.
- when transmitting/receiving a frame to/from another node, each node specifies a source node and a destination node by adding a source SAN address and a destination SAN address to the frame.
- Each of the switches 102 a , 102 b , 102 c and the copy management switch 103 searches the routing table for a destination SAN domain address of a frame to route the frame.
- the frame is transferred to the node that is directly connected to the switch 102 and whose SAN node address agrees with the destination SAN node address of the frame.
- a frame is a unit of data or an access request transmitted through a protocol used for the SAN 101 .
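The two-level lookup described above (route on the SAN domain address, then deliver on the SAN node address within the switch's own domain) can be sketched as follows; the frame layout and port names are illustrative assumptions.

```python
# Hedged sketch of the two-level routing described above: a SAN address
# pairs a domain address (identifying a switch) with a node address within
# that domain. All names here are illustrative, not from the patent.

MY_DOMAIN = 5                       # this switch's SAN domain address

def route_frame(frame, routing_table, local_ports):
    dst_domain, dst_node = frame["dst"]       # (SAN domain, SAN node)
    if dst_domain == MY_DOMAIN:
        # Destination hangs off this switch: deliver on the local port.
        return local_ports[dst_node]
    # Otherwise forward toward the switch that owns that domain.
    return routing_table[dst_domain]

routing_table = {4: "port-to-102c"}           # domain 4 -> next-hop port
local_ports = {0x02: "port-to-104b"}          # node 02 is attached here
assert route_frame({"dst": (5, 0x02)}, routing_table, local_ports) == "port-to-104b"
assert route_frame({"dst": (4, 0x01)}, routing_table, local_ports) == "port-to-102c"
```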
- This embodiment is based on the assumption that because of a low connection cost, a short distance, performance of the storage device 104 b higher than that of the storage device 104 a , or the like, the host 105 can access the storage device 104 b at higher speed as compared with a case where the host accesses the storage device 104 a.
- the management terminal 106 is connected to the copy management switch 103 .
- the user or administrator of the computer system instructs the copy management switch 103 to copy original data to the storage device 104 b by use of the management terminal 106 .
- the administrator enters, through the management terminal 106 , information indicating the association of a location of the original data with a location of the copied data (hereinafter referred to as “copy data”) in the copy management switch 103 , more specifically, in a copy management table 131 described below.
- the administrator or the like may also instruct, through the management terminal 106 , the copy management switch 103 to collect information about the frequency of accesses from the host 105 to the storage device 104 a , and the like, and then to copy to the storage device 104 b only areas, for which the access frequency is high, instead of the whole original data.
- a Fibre Channel switch has a table used to search for the unique SAN address of a port in the SAN 101 by use of a world wide name (hereinafter referred to as a “WWN”), which is a globally unique and unchangeable value assigned to a node or a port.
- the WWN includes a WWPN (World Wide Port Name) that is unique to a port connected to the SAN 101 , and a WWNN (World Wide Node Name) that is unique to a node.
- a node having a plurality of connection ports is allowed to have one WWNN and a plurality of WWPNs.
- in an iSCSI-based SAN, a name management system such as iSNS (Internet Storage Name Service) provides the equivalent name resolution.
- FIG. 2 is a diagram illustrating a configuration of the copy management switch 103 .
- the copy management switch 103 comprises the following: a port 111 used to connect another node; a controller 112 ; a management port 113 used to connect the management terminal 106 ; an address translation unit 114 ; and a switch processing unit 115 that performs routing and switching.
- the switch processing unit 115 holds the routing table required for routing.
- the controller 112 comprises a CPU 1121 , a memory 1122 , and a non-volatile storage 1123 .
- the address translation unit 114 comprises a CPU 1141 and a memory 1142 . It is to be noted that because the configuration disclosed in this figure is merely a preferred embodiment, another configuration may also be applied so long as it can achieve equivalent functions.
- FIG. 3 is a diagram illustrating programs and data that are stored in the memories 1122 , 1142 and non-volatile storage 1123 of the copy management switch 103 .
- the initialization program 121 is a program that is executed by the CPU 1121 upon start-up of the copy management switch 103 .
- the CPU 1121 reads each of the other programs from the non-volatile storage 1123 into the memory 1122 and the memory 1142 , and also reads the copy management table 131 described below into the memory 1142 possessed by each address translation unit 114 .
- a management-terminal-submitted request processing program 122 , a routing protocol processing program 123 , and a name service processing program 124 are stored in the memory 1122 of the controller 112 .
- the CPU 1121 executes these programs.
- An address translation program 126 is stored in the memory 1142 of the address translation unit 114 , and is executed by the CPU 1141 .
- the CPU 1121 changes contents of the copy management table 131 according to a request submitted from the management terminal 106 , which is received through the management port 113 . Additionally, by executing the management-terminal-submitted request processing program 122 , the CPU 1121 executes copy of data according to the request from the management terminal 106 .
- the CPU 1121 can perform management applied to general switches.
- examples of protocols used for the management port 113 include TCP/IP.
- another protocol may also be used so long as it is possible to communicate with the management terminal 106 by the protocol.
- by executing the routing protocol processing program 123 , the CPU 1121 exchanges information about connections in the SAN 101 (hereinafter referred to as “connection information”) with the other switches 102 to create a routing table, and then stores the created routing table in the memory possessed by the switch processing unit 115.
- by executing the name service processing program 124 , the CPU 1121 writes to the name database 125 information about nodes connected to the copy management switch 103 , and responds to search requests from the nodes. For the purpose of receiving from a node a request for searching the name database 125 , a SAN address is allocated to the controller 112.
- during the execution of the address translation program 126 by the CPU 1141 of the address translation unit 114 , as soon as the port 111 receives a frame, the CPU 1141 translates the destination and source SAN addresses of a read request, read data, or the like, according to the information stored in the copy management table 131 . Details of the address translation processing will be described later. It is to be noted that although the address translation processing is executed on the basis of the program in this embodiment, dedicated hardware may also perform the address translation processing.
- FIG. 4 is a diagram illustrating how the copy management table 131 is configured.
- the copy management table 131 has a plurality of copy management entries 132 .
- Each of the copy management entries 132 holds information about the association of original data with copy data.
- the copy management entry 132 comprises the following: a field 133 for storing an original SAN address indicating the storage device 104 that stores the original data; a field 134 for storing a number indicating a logical unit in the storage device 104 that stores the original data (hereinafter referred to as an “original LUN”); a field 135 for storing a logical block address indicating a starting location of the original data in the logical unit (hereinafter referred to as an “original LBA”); a field 136 for storing an original length indicating a size of the original data; a field 137 for storing a copy SAN address indicating the storage device 104 that stores the copy data corresponding to the original data of the copy management entry 132 ; a field 138 for storing a copy LUN indicating the logical unit that stores the copy data; and a field 139 for storing a copy LBA indicating a starting location of the copy data.
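The entry layout can be sketched as a small record with a containment check; the field names mirror fields 133 through 139 as described in the text, while the method name and address format are illustrative assumptions.

```python
from dataclasses import dataclass

# Minimal sketch of one copy management entry 132. The "covers" check
# (is the requested area inside the original area?) is the hit test used
# by the address translation processing; names here are illustrative.

@dataclass
class CopyManagementEntry:
    original_san_address: str   # field 133: device holding the original
    original_lun: int           # field 134
    original_lba: int           # field 135: start of the original data
    original_length: int        # field 136
    copy_san_address: str       # field 137: device holding the copy
    copy_lun: int               # field 138
    copy_lba: int               # field 139: start of the copy data

    def covers(self, san, lun, lba, length):
        """Does this entry's original area contain the requested area?"""
        return (san == self.original_san_address
                and lun == self.original_lun
                and self.original_lba <= lba
                and lba + length <= self.original_lba + self.original_length)

# Values from the FIG. 4 example: original in LUN 0 from LBA 0, length
# 100000; copy in LUN 5 from LBA 50000 of the other device.
entry = CopyManagementEntry("4.01", 0, 0, 100000, "5.02", 5, 50000)
assert entry.covers("4.01", 0, 200, 100)
assert not entry.covers("4.01", 0, 99990, 100)   # runs past the original
```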
- the copy management switch 103 first transfers original data stored in the storage device 104 a to the storage device 104 b to create copy data therein.
- information indicating the association of the storage device 104 a storing the original data with the storage device 104 b storing the copy data is written to the copy management table 131 .
- the copy management switch 103 judges whether or not the information indicating the association of the original data with the copy data includes the address information held in a frame which includes the read request, and thereby determines whether or not data corresponding to the read request is the original data, and whether or not there exists the copy data corresponding to the original data.
- the copy management switch 103 converts the read request for the original data stored in the storage device 104 a , which has been received from the host 105 , to a read request to the storage device 104 b that stores the copy data.
- the copy management switch 103 changes a SAN address indicating a request destination (storage device 104 a ), which is included in the read request, to a SAN address of the storage device 104 b that stores the copy data. This enables effective use of the network.
- the user or administrator of the system uses the management terminal 106 to transmit the following information to the copy management switch 103 : a SAN address of the storage device 104 a ; a logical unit number (hereinafter referred to as “LUN”) that is an address of the original data in the storage device 104 a ; a logical block address (hereinafter referred to as “LBA”) of the original data; a length of the original data; a SAN address of the storage device 104 b ; and the data copy destination's LUN and LBA.
- the controller 112 which has received the information transmits the read request for the original data to the storage device 104 a.
- the controller 112 stores in the memory 1122 the read data that has been transmitted from the storage device 104 a . Subsequently, the controller 112 transmits a write request to the storage device 104 b , and thereby writes the original data stored in the memory 1122 to the storage device 104 b . The data is copied through the above-mentioned processing.
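The read-then-write copy loop just described can be sketched as follows. The device interface and chunk size are assumptions for illustration; the patent specifies only that read data is buffered in the controller's memory and then written to the destination.

```python
# Sketch (assumed interfaces): the controller's copy processing -- read the
# original into a buffer (standing in for memory 1122), then write the
# buffer to the copy destination, one chunk at a time.

CHUNK = 4096   # illustrative transfer size

class ToyDevice:
    """Trivial byte-addressed stand-in for a storage device 104."""
    def __init__(self):
        self.blocks = {}
    def read(self, lun, lba, n):
        return bytes(self.blocks.get((lun, lba + i), 0) for i in range(n))
    def write(self, lun, lba, data):
        for i, b in enumerate(data):
            self.blocks[(lun, lba + i)] = b

def copy_data(src_dev, src_lun, src_lba, length, dst_dev, dst_lun, dst_lba):
    copied = 0
    while copied < length:
        n = min(CHUNK, length - copied)
        # Read request to the device holding the original data.
        buf = src_dev.read(src_lun, src_lba + copied, n)
        # Write request to the device receiving the copy data.
        dst_dev.write(dst_lun, dst_lba + copied, buf)
        copied += n
    return copied

src, dst = ToyDevice(), ToyDevice()
src.write(0, 0, bytes(range(10)))
assert copy_data(src, 0, 0, 10, dst, 5, 50000) == 10
assert dst.read(5, 50000, 10) == bytes(range(10))
```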
- the copy operation described above is merely an example of the copy processing. Therefore, how to copy the data is not limited to this method.
- the controller 112 may also be provided with a dedicated buffer for storing read data instead of the memory 1122 .
- the storage device 104 a itself may also perform the copy processing.
- the SCSI protocol standard contains a command of EXTENDED COPY.
- the storage device that can handle this command copies a specific area in the storage device to a specific area in another storage device according to what the command specifies.
- if the storage device 104 a holding the original data can handle the EXTENDED COPY command, it is possible to perform the copy processing also in the manner described below.
- the controller 112 transmits the EXTENDED COPY command for copy processing to the storage device 104 a that holds the original data.
- the storage device 104 a transfers to the storage device 104 b the contents of the storage area specified by the EXTENDED COPY command. After the transfer of the data stored in the specified storage area ends, the storage device 104 a transmits a response indicating the end of the processing to the source of the EXTENDED COPY command (in this case, the copy management switch 103 ), whereby the copy processing is completed.
- the copy management switch 103 , which has received the frame from the host 105 , causes the address translation unit 114 to translate the source address or the destination address of the frame, and then transmits the frame to the appropriate device.
- FIG. 5 is a flowchart illustrating an example of how the address translation unit 114 executes address translation processing.
- the CPU 1141 starts execution of the address translation program 126 (step 151 ). Then, the CPU 1141 judges whether or not the frame received by the port 111 is a frame containing the read request to the storage device 104 which has been issued by the host 105 (step 152 ).
- the CPU 1141 judges whether or not a copy of data requested by the read request exists in the computer system. To be more specific, the CPU 1141 judges whether or not the copy management table 131 has a copy management entry 132 satisfying a condition that an area indicated by the information stored in the field 133 , the field 134 , the field 135 , and the field 136 , which are included in the copy management entry 132 , includes an area indicated by a destination SAN address, a LUN, a LBA, and a length which are stored in the frame containing the read request (step 153 ).
- the CPU 1141 converts, by use of the copy management entry 132 , the frame containing the read request into a frame containing a read request for the storage device 104 in which the copy data is stored (step 154 ). To be more specific, the CPU 1141 changes the destination of the frame containing the read request to the value stored in the field 137 , changes the LUN to the value stored in the field 138 , and changes the LBA to the value determined by (LBA + the value stored in the field 139 − the value stored in the field 135 ).
- the CPU 1141 judges whether the frame is intended for data transmitted from the storage device 104 according to the read request (hereinafter referred to as “read data”) or for a response (step 155 ). If the frame is intended for read data or a response, the CPU 1141 judges whether or not the read data or the response has been transmitted from the storage device 104 b that stores the copy data. To be more specific, the CPU 1141 judges whether or not the copy management table 131 has a copy management entry 132 whose value stored in the field 137 agrees with the source of the frame (step 156 ).
- the CPU 1141 refers to the copy management entry 132 that has been found in step 156 to change the frame source to an original SAN address stored in the field 133 (step 157 ).
- the CPU 1141 transmits the frame completing the processing to the switch processing unit 115 .
- the CPU 1141 transmits to the switch processing unit 115 the received frame just as it is (step 158 ).
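The FIG. 5 flow (steps 152 through 158) can be condensed into one function. This is a sketch: frames and entries are plain dicts, and the key names are assumptions; the field numbers in the comments follow the text.

```python
# Compact sketch of the FIG. 5 address translation flow (steps 152-158).

def translate(frame, entries):
    if frame["type"] == "read_request":
        for e in entries:                      # step 153: does a copy exist?
            if (frame["dst"] == e["orig_san"] and frame["lun"] == e["orig_lun"]
                    and e["orig_lba"] <= frame["lba"]
                    and frame["lba"] + frame["len"]
                        <= e["orig_lba"] + e["orig_len"]):
                # Step 154: redirect the read request to the copy data.
                frame["dst"] = e["copy_san"]                     # field 137
                frame["lun"] = e["copy_lun"]                     # field 138
                frame["lba"] += e["copy_lba"] - e["orig_lba"]    # fields 139/135
                break
    elif frame["type"] in ("read_data", "response"):
        for e in entries:                      # step 156: sent by the copy?
            if frame["src"] == e["copy_san"]:
                frame["src"] = e["orig_san"]   # step 157: restore original source
                break
    return frame                               # step 158: on to switch processing

# Entry values mirror the FIG. 4 example.
entries = [{"orig_san": "4.01", "orig_lun": 0, "orig_lba": 0,
            "orig_len": 100000, "copy_san": "5.02", "copy_lun": 5,
            "copy_lba": 50000}]
req = {"type": "read_request", "dst": "4.01", "lun": 0, "lba": 200, "len": 8}
req = translate(req, entries)
assert (req["dst"], req["lun"], req["lba"]) == ("5.02", 5, 50200)
data = translate({"type": "read_data", "src": "5.02"}, entries)
assert data["src"] == "4.01"   # the host sees the original device as source
```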
- the user or administrator of the system uses the management terminal 106 to instruct the copy management switch 103 to perform data copy and prepare the copy management table 131 .
- FIG. 4 illustrates an example as follows: a SAN domain address of the switch 102 c is 4; a SAN node address of the storage device 104 a is 01; a SAN domain address of the copy management switch 103 is 5; a SAN node address of the storage device 104 b is 02; original data having a length of 100000 is stored in LUN 0 of the storage device 104 a starting from LBA 0; and the whole original data is copied to an area starting from LBA 50000 of LUN 5 of the storage device 104 b .
- information indicating the association of the original data with the copy data is stored in each field of the copy management entry 132 .
- the host 105 issues a read request to the storage device 104 a .
- a frame corresponding to the read request includes the SAN address, LUN, LBA, and length of the storage device 104 a as a destination, and the SAN address of the host 105 as a source.
- the read request is routed according to the SAN address of the destination, and consequently arrives at the copy management switch 103 via the switch 102 a .
- the copy management switch 103 which has received the read request checks contents of the frame against the information in the copy management table 131 . If there exists copy data corresponding to the read request, the copy management switch 103 converts the read request into a read request for the copy data, and then routes the converted frame to the storage device 104 b.
- the storage device 104 b that has received the read request reads the copy data as a target of the read request, and then transmits the read data to the host 105 as a source of the read request.
- a frame of the read data includes a SAN address of the host 105 as a destination, and a SAN address of the storage device 104 b as a source.
- the copy management switch 103 which has received the read data changes the source of the read data to the storage device 104 a according to the information of the copy management table 131 . After that, the read data is routed according to the SAN address of the destination, and consequently arrives at the host 105 via the switch 102 a . The host 105 receives the read data as if it were transmitted from the storage device 104 a.
- the read request to the storage device 104 a is actually handled by the storage device 104 b capable of accessing at higher speed in the SAN 101 . Accordingly, response speed becomes higher, making it possible to reduce loads on the switches 102 b , 102 c , and the storage device 104 a.
- FIG. 6 is a diagram illustrating a second embodiment of a computer system to which the present invention is applied.
- a point of difference between the first and second embodiments is that a plurality of the copy management switches 203 are provided in the second embodiment. It is to be noted that because other configurations are similar to those in the first embodiment, detailed description thereof will be omitted.
- this embodiment is based on the assumptions that the shortest route from the host 105 to the storage device 104 a is the host 105 → the switch 102 a → the copy management switch 203 a → the switch 102 b → the storage device 104 a , and that the shortest route from the host 105 to the storage device 104 b is the host 105 → the switch 102 a → the copy management switch 203 b → the storage device 104 b . Moreover, this embodiment is also based on the assumption that the connection cost from the host 105 to the storage device 104 a is higher than that from the host 105 to the storage device 104 b.
- the administrator or user of the system uses the management terminal 106 , which is connected to the copy management switches 203 a , 203 b , to instruct the copy management switch 203 b to copy original data held in the storage device 104 a to the storage device 104 b , and then writes information indicating the association of the original data with copy data to the copy management table 231 possessed by the copy management switches 203 a , 203 b.
- a proxy address table 241 as well as the programs described in the first embodiment is stored in the memory 1122 possessed by the copy management switch 203 .
- contents of the address translation program 226 executed by the CPU 1141 also differ from those described in the first embodiment.
- FIG. 11 is a diagram illustrating an example of the copy management table 231 .
- the copy management entry 232 of the copy management table 231 has a field 240 for storing a local flag.
- the local flag is a flag indicating the relationship of connection between the storage device 104 corresponding to the original SAN address 133 of the copy management entry 232 possessed by the copy management switch 203 and each of the plurality of copy management switches 203 including this copy management switch 203 .
- each value is set in accordance with the number of devices existing between each of the plurality of copy management switches 203 and the storage device 104 .
- a state in which the number of devices is small is expressed as “near”.
- the copy management switch 203 b is connected at a position nearer to the storage device 104 b than the copy management switch 203 a is. Therefore, the management terminal 106 stores in the copy management table 231 of the copy management switch 203 a the copy management entry 232 in which the local flag is 0, and stores in the copy management table 231 of the copy management switch 203 b the copy management entry 232 in which the local flag is 1.
- FIG. 7 is a diagram illustrating a configuration of the proxy address table 241 .
- the proxy address table 241 is a table for storing the association among a SAN address that points to the host 105 requesting data (hereinafter referred to as a “host SAN address”), an original SAN address, and a proxy address used for address translation processing in this embodiment (hereinafter referred to as a “proxy SAN address”).
- the proxy address table 241 has a plurality of proxy address entries 242 .
- Each of the proxy address entries 242 comprises a field 243 for storing a host SAN address, a field 244 for storing an original SAN address, and a field 245 for storing a proxy SAN address. How to use the proxy address table 241 will be described later.
- FIG. 8 is a flowchart illustrating an example of address translation processing by the CPU 1141 according to this embodiment.
- an address translation program executed in the copy management switch 203 is called an address translation program 226 .
- the CPU 1141 starts execution of the address translation program 226 (step 251 ).
- the CPU 1141 judges whether or not the received frame is intended for a read request (step 252 ). If the received frame is intended for a read request, the CPU 1141 judges whether or not a copy of data requested by the read request exists in the computer system.
- the CPU 1141 judges whether or not the copy management table 231 has a copy management entry 232 satisfying a condition that an area indicated by the information stored in the field 133 , the field 134 , the field 135 , and the field 136 , which are included in the copy management entry 232 , includes an area indicated by a destination SAN address, a LUN, a LBA, and a length which are stored in the frame containing the read request (step 253 ).
- the CPU 1141 uses the copy management entry 232 , which has been found in step 253 , to convert the read request into a read request to the storage device 104 that stores copy data.
- a destination of the frame containing the read request is changed to a copy SAN address stored in the field 137 ;
- a LUN of the frame is changed to a copy LUN stored in the field 138 ;
- a LBA of the frame is changed to a value determined by (a LBA+a value of a copy LBA stored in the field 139 ⁇ a value of an original LBA stored in the field 135 ) (step 254 ).
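The conversion in step 254 can be sketched as follows; the dictionary keys are hypothetical names standing in for the frame fields and the fields 135 and 137 to 139 of the copy management entry 232.

```python
# Sketch of step 254: converting a read request for original data
# into a read request for the copy data.
def convert_read_request(request, entry):
    converted = dict(request)
    converted["dest_san_addr"] = entry["copy_san_addr"]  # field 137
    converted["lun"] = entry["copy_lun"]                 # field 138
    # The LBA is shifted by the offset between the copy area and the
    # original area: LBA + copy LBA (field 139) - original LBA (field 135).
    converted["lba"] = (request["lba"]
                        + entry["copy_lba"] - entry["original_lba"])
    return converted
```

For example, if the original area starts at LBA 1000 and the copy area starts at LBA 0, a request for LBA 1500 becomes a request for LBA 500 of the copy.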
- the CPU 1141 judges whether or not the read request is issued to the storage device 104 that is connected to the copy management switch 203 to which the CPU 1141 belongs. More specifically, to begin with, a judgment is made as to whether or not an area indicated by a SAN address, a LUN, a LBA, and a length, of a destination of the read request is included in an area indicated by the copy SAN address 137 , the copy LUN 138 , the copy LBA 139 , and the original length 136 of a copy management entry 232 in the copy management table 231 . A further judgment is then made as to whether or not the pertinent copy management entry 232 has a value of 1 in the local flag 240 (step 255 ).
- the CPU 1141 changes a source of the frame containing the read request.
- the reason for the change is to differentiate between a read request for the copy data and a read request for data other than the copy data held by the storage device 104 .
- the CPU 1141 first generates a proxy SAN address.
- the proxy SAN address is determined such that it includes a SAN domain address assigned to the copy management switch 203 , and that it does not overlap SAN addresses of the other nodes and also does not overlap a proxy SAN address stored in the field 245 of the proxy address entry 242 held in the proxy address table 241 .
- the CPU 1141 writes, to an unused proxy address entry 242 in the proxy address table 241 , the association among a host SAN address corresponding to the host 105 that has issued the read request, an original SAN address, and a proxy SAN address.
- the CPU 1141 stores the host SAN address indicating the host 105 as a source of the read request in the field 243 , stores the original SAN address 133 in the field 244 , and stores the generated proxy SAN address in the field 245 .
- the CPU 1141 then changes a source of the frame, which is the read request, to the generated proxy address (step 256 ).
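The proxy SAN address generation in step 256 can be sketched as follows. The address layout (domain number times a fixed domain size plus a local number) is an assumption made for illustration; the embodiment only requires that the address lie in the switch's SAN domain and collide with neither existing node addresses nor proxy addresses already in the table.

```python
# Sketch of step 256: generating a proxy SAN address that belongs to
# the SAN domain of the copy management switch and does not overlap
# other node addresses or proxy addresses already recorded in the
# proxy address table.
def generate_proxy_san_address(domain, node_addresses, proxy_table,
                               domain_size=256):
    in_use = set(node_addresses) | {e["proxy"] for e in proxy_table}
    for local in range(domain_size):
        candidate = domain * domain_size + local  # assumed address layout
        if candidate not in in_use:
            return candidate
    raise RuntimeError("no free proxy SAN address in domain")
```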
- If it is judged in step 252 that the received frame is not intended for a read request, the CPU 1141 judges whether or not the received frame is intended for read data (step 257 ). If the received frame is intended for read data, the CPU 1141 judges whether or not a destination of the read data is a proxy SAN address generated by the copy management switch 203 . More specifically, the CPU 1141 judges whether or not the proxy address table 241 includes a proxy address entry 242 in which the SAN address pointing to the destination of the read data agrees with the proxy SAN address stored in the field 245 (step 258 ).
- the CPU 1141 uses information stored in the proxy address entry 242 , which has been found in step 258 , to change the source of the frame to an original SAN address stored in the field 244 , and also to change the destination to the host SAN address stored in the field 243 (step 259 ).
- the CPU 1141 judges whether or not the received frame is intended for a response (step 260 ). If the frame is intended for a response, the CPU 1141 judges whether or not a destination of the frame is a node indicated by the proxy SAN address generated by the copy management switch 203 . More specifically, the CPU 1141 judges whether or not the proxy address table 241 includes the proxy address entry 242 in which a SAN address pointing to a destination of the frame agrees with the proxy SAN address stored in the field 245 (step 261 ).
- the CPU 1141 uses information stored in the found proxy address entry 242 to change the source of the frame to an original SAN address stored in the field 244 , and also to change the destination to the host SAN address stored in the field 243 . In addition, the CPU 1141 deletes the proxy address entry 242 from the proxy address table 241 (step 262 ).
- If the destination of the frame does not satisfy the condition shown in step 255 , if the destination of the frame is not judged to be the proxy SAN address in step 258 or 261 , or if address translation of the frame is completed in step 256 , 259 , or 262 , the CPU 1141 transmits the processed frame to the switch processing unit 115 (step 263 ).
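The whole flow of steps 252 to 263 can be sketched as one frame-handling function. This is a simplified, hypothetical rendering: the field names are assumptions, the step 255 check is reduced to a local flag on the matched entry, and the proxy address generation is abbreviated.

```python
def translate_frame(frame, copy_table, proxy_table, local_domain):
    """Simplified sketch of steps 252-263 for a single frame (dict)."""
    if frame["type"] == "read_request":                        # step 252
        for e in copy_table:                                   # step 253
            if (frame["dest"] == e["original_san"]
                    and frame["lun"] == e["original_lun"]
                    and e["original_lba"] <= frame["lba"]
                    and frame["lba"] + frame["length"]
                        <= e["original_lba"] + e["original_length"]):
                frame["dest"] = e["copy_san"]                  # step 254
                frame["lun"] = e["copy_lun"]
                frame["lba"] += e["copy_lba"] - e["original_lba"]
                if e["local_flag"] == 1:                       # step 255
                    proxy = max([x["proxy"] for x in proxy_table],
                                default=local_domain * 256) + 1
                    proxy_table.append({"host": frame["src"],  # step 256
                                        "original": e["original_san"],
                                        "proxy": proxy})
                    frame["src"] = proxy
                break
    elif frame["type"] in ("read_data", "response"):           # steps 257, 260
        for e in proxy_table:                                  # steps 258, 261
            if frame["dest"] == e["proxy"]:
                frame["src"] = e["original"]                   # steps 259, 262
                frame["dest"] = e["host"]
                if frame["type"] == "response":
                    proxy_table.remove(e)  # association no longer needed
                break
    return frame                                               # step 263: forward
```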
- the user or administrator of the system uses the management terminal 106 to instruct the copy management switches 203 to perform data copy and to store the information in the copy management table 231 of each copy management switch.
- the host 105 issues a read request to the storage device 104 a .
- the read request includes a SAN address, a LUN, a LBA, a length, of the storage device 104 a as a destination, and a SAN address of the host 105 as a source.
- the read request is routed according to the SAN address of the destination, and thereby arrives at the copy management switch 203 a via the switch 102 a.
- On receiving the read request, the copy management switch 203 a checks the read request against information in the copy management table 231 .
- the copy management switch 203 a changes a destination of the read request, thereby converting it into a read request for copy data held in the storage device 104 b , and then routes the converted read request. However, because the storage device 104 b is not connected to the copy management switch 203 a , a proxy address is not generated, nor is the source changed.
- the modified read request is routed according to a SAN address of the destination, and consequently arrives at the copy management switch 203 b .
- the copy management switch 203 b which has received the read request checks contents of the received frame against information in the copy management table 231 .
- the copy management switch 203 b generates a proxy SAN address, and then writes the association among a SAN address of the host 105 , a SAN address of the storage device 104 a as an original, and the proxy SAN address to the proxy address entry 242 of the proxy address table 241 .
- the copy management switch 203 b changes a source of the read request to the proxy SAN address before routing the frame.
- the storage device 104 b which has received the read request reads out copy data corresponding to the read request, and after changing the destination to the proxy SAN address, the storage device 104 b transmits the read data to the copy management switch 203 b .
- the frame containing the read data includes the proxy SAN address as a destination, and a SAN address of the storage device 104 b as a source.
- the copy management switch 203 b changes a source of the read data to a SAN address of the storage device 104 a , and also changes a destination to a SAN address of the host 105 , on the basis of information in the proxy address table 241 .
- the frame containing the read data is routed according to the SAN address of the destination, and consequently arrives at the host 105 via the switch 102 a .
- the host 105 receives the read data as if it were transmitted from the storage device 104 a.
- the copy management switch 203 b changes a source to a SAN address of the storage device 104 a , and also changes a destination to a SAN address of the host 105 , before routing the response.
- the copy management switch 203 b deletes from the proxy address table 241 the proxy address entry 242 that stores the association.
- the host 105 receives the response as if it were transmitted from the storage device 104 a.
- the copy management switch 203 a exists on a path from the host 105 to the storage device 104 a holding the original data.
- the storage device 104 b holding the copy data is not connected to the copy management switch 203 a .
- the read request is converted into a read request to the storage device 104 b holding the copy data, and subsequently arrives at the copy management switch 203 b to which the storage device 104 b holding the copy data is connected.
- the copy data is transmitted to the host 105 as read data from the storage device 104 a.
- a proxy SAN address makes it possible to differentiate a read request changed by the copy management switch 203 from a command such as a read request issued to the storage device 104 b holding copy data.
- the storage device 104 b having the copy data can thus also be used as a usual storage device.
- Because each read request uses a unique proxy SAN address, it becomes possible to copy the original data held in a plurality of storage devices 104 to one storage device 104 b , and then to use the copied data as copy data.
- the proxy SAN address is used to classify read data from the storage device 104 into read data whose address information is required to be translated, and read data whose address information is not required to be translated.
- If information that enables recognition of the association among a read request, read data, and a response is added to a frame, it is also possible to classify the read data by the additional information without using the proxy SAN address.
- an ID called an exchange ID is added to each frame. Accordingly, it is also possible to classify the read data and the response according to this information.
- FIG. 9 is a diagram illustrating a configuration example of a computer system to which a third embodiment according to the present invention is applied.
- a SAN 101 comprises switches 102 a , 102 b and a copy management switch 303 .
- a host 105 , a storage device 104 a , and a storage device 104 b are connected to the SAN 101 .
- Original data is stored in the storage device 104 a.
- this embodiment is also based on the assumption that a connection cost for a communication line between the host 105 and the storage device 104 a is higher than that for a communication line between the host 105 and the storage device 104 b.
- the user, or the administrator, of the system uses the management terminal 106 , which is connected to the copy management switch 303 , to copy original data, and then to write information about the association of the original data with the copy data to the copy management table 231 of the copy management switch 303 for the purpose of managing the information.
- a virtual address table 341 , which will be described below, is also stored and managed.
- the copy management switch 303 behaves toward a device connected to the copy management switch 303 as if there were a virtual storage device 104 (hereinafter referred to as a "virtual storage 307 ").
- the host 105 thereafter judges that the original data is stored in the virtual storage 307 , and thereby issues a read request to the virtual storage 307 .
- the reason why the read request is issued to the virtual storage 307 is as follows: because copy management entries 232 a and 232 b described below are stored in the copy management switch 303 through the management terminal 106 , the copy management switch 303 changes a read request to the virtual storage 307 into a read request to the storage device 104 a or the storage device 104 b depending on the presence or absence of copy data. This enables effective use of the network.
- a WWN of the virtual storage 307 is given to the host 105 .
- the host 105 uses a name service to obtain the SAN address of the virtual storage 307 from the WWN of the virtual storage 307 .
- a configuration of the copy management switch 303 in this embodiment is the same as the copy management switch 103 in the first embodiment.
- as far as the information and the like stored in the memory of the copy management switch 303 are concerned, there are the following points of difference from the second embodiment.
- a first point of difference is that the CPU 1121 executes an initialization program 321 (the initialization program 121 used for the copy management switch 303 ) to read the virtual address table 341 described below from the non-volatile storage 1123 , and then to write to the name database 125 the address information of the virtual storage 307 stored in the virtual address table 341 .
- a second point of difference is that the CPU 1121 executes a management-terminal-submitted request processing program 322 (the request processing program 122 used for the copy management switch 303 ) not only to perform the processing in the second embodiment, but also to change the virtual address table 341 held in the non-volatile storage 1123 in response to a request that comes from the management terminal 106 and is received by the management port 113 .
- FIG. 10 is a diagram illustrating contents of the virtual address table 341 .
- the virtual address table 341 comprises a plurality of virtual address entries 342 .
- each virtual address entry 342 corresponds to one virtual node, for instance, the virtual storage 307 .
- a SAN address of the virtual node (hereinafter referred to as a "virtual SAN address") and virtual WWNs (hereinafter referred to as a "virtual WWPN" and a "virtual WWNN") are written to the fields 343 , 344 , and 345 of the virtual address entry 342 .
- the copy management table 231 and the proxy address table 241 used in this embodiment are the same as those used in the second embodiment.
- the address translation program 226 is also the same as that used in the second embodiment.
- FIGS. 10 and 11 illustrate values set in the copy management table 231 , the copy management entries 232 a , 232 b , the virtual address table 341 , and the virtual address entry 342 .
- a SAN domain address of the switch 102 a is 4; a SAN domain address of the copy management switch 303 is 5; a SAN address of the storage device 104 a is 401; a SAN address of the storage device 104 b is 501; a SAN address assigned to the virtual storage 307 is 502; a WWPN is 1234; and a WWNN is 5678.
- original data is stored in an area having a length of 100000 and starting from LUN 0 and LBA 0 of the storage device 104 a , and then part of the original data, starting from the top of the original data and having a length of 50000, is copied to an area starting from LUN 0 and LBA 0 of the storage device 104 b .
- the copy management entry 232 a shows that a read request to read an area having a length of 50000 and starting from LUN 0 and LBA 0 of the virtual storage 307 is converted into a read request to the storage device 104 b .
- the copy management entry 232 b shows that a read request to read an area having a length of 50000 and starting from LUN 0, LBA 50000 of the virtual storage 307 is converted into a read request to the storage device 104 a.
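Using the values above, the two copy management entries split the virtual storage 307's address space as sketched below. This is a hypothetical illustration; the tuple layout is an assumption, and the SAN addresses 501 and 401 are those of the storage devices 104 b and 104 a in this example.

```python
# Sketch of entries 232a and 232b: LBA 0-49999 of the virtual storage
# is served from the copy on storage device 104b (SAN address 501),
# and LBA 50000-99999 from the original on storage device 104a (401).
ENTRIES = [
    # (virtual start LBA, length, target SAN address, target start LBA)
    (0,     50000, 501, 0),       # entry 232a -> copy on 104b
    (50000, 50000, 401, 50000),   # entry 232b -> original on 104a
]

def route_virtual_read(lba, length):
    for start, span, target, target_lba in ENTRIES:
        if start <= lba and lba + length <= start + span:
            return target, target_lba + (lba - start)
    return None  # request spans both areas or is out of range
```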
- a frame of the read request includes a SAN address, a LUN, a LBA, and a length of the virtual storage 307 as a destination, and a SAN address of the host 105 as a source.
- the frame of the read request is routed according to the SAN address of the destination, and consequently arrives at the copy management switch 303 via the switch 102 a.
- the copy management switch 303 that has received the read request checks information included in the frame of the read request against information in the copy management table 231 . As a result of the check, if the read request matches the copy management entry 232 a , as is the case with the second embodiment, the copy management switch 303 changes a destination of the read request, and thereby converts the read request into a read request to the storage device 104 b having copy data. Further, the copy management switch 303 generates a proxy address to change a source to the proxy address, and then transmits to the storage device 104 b a frame containing the read request.
- the copy management switch 303 writes the association among the host 105 , the virtual storage 307 , and the proxy address to the proxy address entry 242 of the proxy address table 241 . Moreover, if the read request matches the copy management entry 232 b as a result of the check, the read request is converted into a read request for the original data stored in the storage device 104 a , and similar processing is performed thereafter.
- the storage device 104 b that has received the read request reads out specified data, and then sets the proxy SAN address as a destination before transmitting the read data.
- a frame of the read data includes the proxy SAN address indicating the destination, and a SAN address of the storage device 104 b as a source.
- On receiving the frame of the read data, the copy management switch 303 changes a source of the frame of the read data to a SAN address of the virtual storage 307 , and also changes a destination to a SAN address of the host 105 , on the basis of information in the proxy address table 241 .
- the read data is routed according to the SAN address of the destination, and consequently arrives at the host 105 via the switch 102 a .
- the host 105 receives the read data as if it were transmitted from the virtual storage 307 .
- the copy management switch 103 and the like exist on a network path (hereinafter referred to as a “path”) between the host 105 and the storage device 104 a holding the original data, and the copy management switch 103 or the like changes the frame of the read request.
- a read request directly arrives at the copy management switch 303 that does not exist on the path between the host 105 and the storage device 104 a.
- the present embodiment can employ another configuration as shown in FIG. 12 .
- a storage device 104 a is connected to a copy management switch 303 a .
- the administrator or the like uses a management terminal 106 to set a copy management table 231 and a virtual address table 341 of the copy management switches 303 a , 303 b so that each copy management switch provides a virtual storage 307 .
- the virtual address entry 342 whose WWNN is equivalent is written to the virtual address table 341 of the copy management switches 303 a , 303 b so that when the hosts 105 a , 105 b refer to a name database of each of the copy management switches 303 a , 303 b , the virtual storage 307 is recognized as a node having a plurality of ports.
- the storage device 104 to be accessed is specified by the WWNN.
- the hosts 105 a , 105 b refer to the name database, and thereby obtain two SAN addresses of the storage device 104 to be accessed (in actuality, the virtual storage 307 ).
- the host 105 can access the virtual storage 307 by use of any of the SAN addresses. Examples of methods for selecting one port from among the plurality of ports pointed to by the plurality of SAN addresses could conceivably include the two methods described below.
- the other is that if the host 105 cannot obtain topology information, the host 105 transmits a read request to both ports, and then the port which can make a faster access is selected from the two.
- the copy management switch 303 can detect a failure of the storage device 104 .
- the occurrence of a physical disconnection can be detected by loss of light at the port 111 .
- the copy management switch 303 can also detect a failure of the storage device 104 by monitoring contents of the response at the port 111 .
- An example of failover will be described below.
- the copy management switch 303 that has detected a failure of the storage device 104 notifies the management terminal 106 of the occurrence of the failure.
- the user or the like then uses the management terminal 106 to set the copy management table 231 of the copy management switch 303 again.
- the user uses the management terminal 106 to delete the copy management entry 232 a of the copy management switch 303 , then to set the original LBA 135 b and copy LBA 139 b of the copy management entry 232 b at 0, and further, to set the original length 136 b at 100000.
- the copy management switch 303 routes all read requests, which are issued from the host 105 to the virtual storage 307 , to the storage device 104 a . Similar failover processing can be performed also in the first and second embodiments.
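The failover step described above can be sketched as a table update. This is a hypothetical rendering; the entry names "232a" and "232b" and the field keys are assumptions mirroring the fields 135 b, 139 b, and 136 b.

```python
# Sketch of the failover: when storage device 104b fails, entry 232a
# (pointing at the copy) is deleted, and entry 232b is widened so the
# whole virtual area (length 100000 from LBA 0) maps to the original
# data on storage device 104a.
def fail_over(copy_table):
    copy_table[:] = [e for e in copy_table if e["name"] != "232a"]
    for e in copy_table:
        if e["name"] == "232b":
            e["original_lba"] = 0           # field 135b set to 0
            e["copy_lba"] = 0               # field 139b set to 0
            e["original_length"] = 100000   # field 136b set to 100000
    return copy_table
```

After this update, every read request to the virtual storage 307 is routed to the storage device 104 a.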
- FIG. 13 is a diagram illustrating a configuration example of a computer system to which a fourth embodiment according to the present invention is applied. This embodiment is different from the third embodiment in that copy management switches 403 a , 403 b provide a virtual switch 408 .
- a configuration of each of the copy management switches 403 a , 403 b in this embodiment is similar to that of the copy management switch 303 in the third embodiment. However, information and the like stored in the memory 1122 of each of the copy management switches 403 a , 403 b differ from those in the third embodiment in the following points:
- an entry 446 for storing a virtual domain address is added to the virtual address table 441 (virtual address table 341 used for the copy management switch 403 ).
- a virtual domain address stored in the entry 446 indicates a SAN domain address of the virtual switch 408 .
- the CPU 1121 executes the routing protocol processing program 423 (the routing protocol processing program 123 used for the copy management switch 403 ).
- the CPU 1121 then exchanges, with another switch, information about being connected to the virtual switch 408 having a SAN domain address specified by a virtual domain address stored in the entry 446 , and thereby creates a routing table.
- a connection cost between the copy management switch 403 a and the virtual switch 408 and a connection cost between the copy management switch 403 b and the virtual switch 408 are set so that they are equivalent to each other.
- the user or administrator of the system uses the management terminal 106 connected to the copy management switches 403 a , 403 b to issue to the copy management switch 403 a (or 403 b ) an instruction to copy original data stored in the storage device 104 a to the storage device 104 b .
- the administrator or the like sets information in the virtual address table 441 and the copy management table 231 that are provided in each of the copy management switches 403 a , 403 b.
- a SAN domain address of the copy management switch 403 a is 4; a SAN domain address of the copy management switch 403 b is 5; a SAN domain address of the virtual switch 408 is 8; a SAN address of the storage device 104 a is 401; a SAN address of the storage device 104 b is 501; a SAN address assigned to the virtual storage 307 is 801; a WWPN is 1234; and a WWNN is 5678. Original data is stored in an area having a length of 100000 and starting from LUN 0 and LBA 0 of the storage device 104 a , and part of the original data, starting from the top of the original data and having a length of 50000, is copied to an area starting from LUN 0 and LBA 0 of the storage device 104 b.
- a connection cost between the host 105 and each of the copy management switches 403 a , 403 b is assumed to be as follows: for the host 105 a , a connection cost between the host 105 a and the copy management switch 403 b is lower than a connection cost between the host 105 a and the copy management switch 403 a . For the host 105 b , a connection cost between the host 105 b and the copy management switch 403 a is lower than a connection cost between the host 105 b and the copy management switch 403 b.
- the read request arrives at the copy management switch 403 b as a result of the routing that can achieve the lowest connection cost. After that, processing which is the same as that in the third embodiment is performed.
- the read request arrives at the copy management switch 403 a , and then processing which is the same as that in the third embodiment is performed.
- the host 105 can transmit a read request to the copy management switch 403 whose connection cost is low without selecting a path as performed in the configuration in FIG. 12 .
- The reason is that since a connection cost between the copy management switch 403 a and the virtual switch 408 is equivalent to a connection cost between the copy management switch 403 b and the virtual switch 408 , a frame which is transmitted from the host 105 to a SAN domain of the virtual switch 408 arrives at whichever of the copy management switches 403 a and 403 b is closest to the host 105 .
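This cost argument can be sketched numerically. The function and cost values below are hypothetical; the only property taken from the embodiment is that the last hop to the virtual switch 408 has the same cost from both copy management switches, so the total path cost is decided by the host-side cost alone.

```python
# Sketch: a frame addressed to the virtual switch's SAN domain enters
# through whichever copy management switch gives the lowest total cost.
VIRTUAL_HOP_COST = 1  # equal for 403a and 403b by configuration

def entry_switch(host_costs):
    """host_costs: {'403a': cost, '403b': cost} as seen from one host."""
    totals = {sw: c + VIRTUAL_HOP_COST for sw, c in host_costs.items()}
    return min(totals, key=totals.get)
```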
- FIG. 14 is a diagram illustrating a configuration example of a computer system to which a fifth embodiment according to the present invention is applied. This embodiment is different from the first embodiment in that a copy management switch 503 has storage devices 104 c , 104 d.
- the storage device 104 a holds original data, and the host 105 issues a read request for the original data. Additionally, in this embodiment, the copy management switch 503 holds copy data. On receipt of the read request for the original data, which is issued from the host 105 , the copy management switch 503 reads out the copy data held in the storage devices 104 c , 104 d , and then transmits the read data to the host 105 .
- FIG. 15 is a diagram illustrating a specific example of an internal configuration of the copy management switch 503 .
- the copy management switch 503 comprises the following: a protocol converter 5032 including a plurality of ports 5031 and a port processor 5033 ; a disk controller 5035 ; a hard disk 5036 ; a management unit 5037 ; and a switch unit 5034 for connecting these components.
- the port processor 5033 includes a CPU and a memory; and the management unit 5037 includes a CPU, a memory, and a storage device.
- the disk controller 5035 provides another device with the storage capacity, which is obtained from a plurality of hard disks connected to the disk controller 5035 , as one or a plurality of logical storage areas.
- the copy management switch 503 gathers the logical storage areas, which are provided by the plurality of disk controllers 5035 included in the copy management switch 503 itself, into one or a plurality of virtual storage areas, and then provides the virtual storage area or areas to a device connected to the copy management switch 503 .
- the copy management switch 503 transmits/receives a command and data to/from other nodes through the ports 5031 .
- the protocol converter 5032 , which has received the command and the like through the port 5031 , converts the protocol used for the command and the data which have been received, and then transmits them to the switch unit 5034 .
- the protocol converter 5032 judges whether or not the received command is targeted for the virtual storage area provided by the copy management switch 503 . If the received command is targeted for its own storage area, the protocol converter 5032 issues a command to the disk controller 5035 corresponding to the storage area. If the received command is not targeted for its own storage area, the protocol converter 5032 transmits the received frame to the switch unit 5034 as it is.
- Information indicating the association of the virtual storage area provided by the copy management switch 503 with the logical storage area provided by the disk controller 5035 is stored in the storage device of the management unit 5037 .
- the CPU of the management unit 5037 stores this information in a memory possessed by the port processor 5033 .
- the management unit 5037 holds a name database in a memory inside the management unit 5037 , and thereby executes, in the CPU inside, processing that responds to an inquiry about WWNN and the like.
- the switch unit 5034 performs routing according to address information of the frame.
- the disk controller 5035 holds, in an internal memory, information about the association between the logical storage area to be provided and the storage area included in each hard disk 5036 connected. On receipt of a command from the switch unit 5034 (here, the command is stored in the frame), the disk controller 5035 determines a hard disk 5036 corresponding to a storage area specified by the command, and also a stored location in the hard disk 5036 , using the information about the association held in the memory. Then, the disk controller 5035 issues a data read command and the like to the corresponding hard disk 5036 so that the command is handled.
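The association the disk controller 5035 holds can be sketched as a slice table that resolves a logical block address to a hard disk and an offset. This is a hypothetical illustration; the slice layout and disk identifiers are assumptions.

```python
# Sketch of the disk controller's mapping: a logical storage area is
# assembled from slices of several hard disks 5036, and a command's
# logical block address is resolved to a (disk, offset) pair before a
# read command is issued to that disk.
SLICES = [
    # (logical start, length, disk id, start LBA on that disk)
    (0,    1000, "disk0", 0),
    (1000, 1000, "disk1", 0),
]

def resolve(logical_lba):
    for start, length, disk, disk_start in SLICES:
        if start <= logical_lba < start + length:
            return disk, disk_start + (logical_lba - start)
    raise ValueError("address outside logical area")
```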
- the reason why the switch unit 5034 , the disk controller 5035 , and the management unit 5037 are duplicated in FIG. 15 is to achieve redundancy so that reliability is improved. Hence, this configuration is not always required for the present invention.
- the address translation program 126 is stored in a memory possessed by the port processor 5033 of the protocol converter 5032 . If the port 5031 of the protocol converter 5032 receives a frame, or if the switch unit 5034 receives a frame, the address translation program 126 is executed in the port processor 5033 a . In addition, a memory in the management unit 5037 stores the programs that are stored in the memory 1122 described in the first embodiment.
- the host 105 issues a read request for original data held in the storage device 104 a.
- On receiving the read request, the protocol converter 5032 of the copy management switch 503 first executes the address translation program 126 in the port processor 5033 , and then converts the received frame containing the read request into a frame containing a read request for a storage area that stores copy data.
- the protocol converter 5032 then checks contents of the read request converted.
- the read request is targeted for the storage area provided by the copy management switch 503 . Accordingly, the protocol converter 5032 transmits the converted frame containing the read request to the disk controller 5035 .
- the disk controller 5035 that has received the read request through the switch unit 5034 reads out specified data from the hard disk 5036 according to the received read request, and then transmits the read data to the protocol converter 5032 through the switch unit 5034 .
- the protocol converter 5032 which has received the read data executes the address translation program 126 using the port processor 5033 , and thereby changes a source of the read data to a SAN address of the storage device 104 a . Then, the protocol converter 5032 transmits the changed frame containing the read data to the host 105 through the port 5031 . The host 105 receives the read data as if it were transmitted from the storage device 104 a.
- the read request for the original data stored in the storage device 104 a , which has been issued from the host 105 , is handled by the copy management switch 503 using the copy data stored in the storage area that is provided by the copy management switch 503 .
- the storage area provided by the copy management switch 503 is used to hold the copy data.
- the copy management switch 503 in this embodiment can be used in the same manner as the copy management switch 103 described in the above-mentioned embodiments, i.e., from the first to fourth embodiments.
- a sixth embodiment of the present invention will be described below.
- in the embodiments described above, the whole original data, or a specified part of the original data, is copied, and the data is copied concurrently with the setting of the copy management tables 131 , 231 .
- in contrast, a copy management switch 703 described below copies data, which is a target of the read request, from the storage device 104 a storing original data to the specified storage device 104 b . The operation performed in this manner enables efficient use of the storage capacity possessed by the storage device 104 b.
- a configuration of a computer system in the sixth embodiment is the same as that in the first embodiment except that the copy management switch 103 is replaced by the copy management switch 703 .
- this embodiment is different from the first embodiment in information and the like stored in memories 1122 , 1142 of the copy management switch 703 in a manner described below.
- the CPU 1121 executes the initialization program 721 to create a cache table 741 corresponding to a cache index 737 of each copy management entry 732 of a copy management table 731 described later.
- the cache table 741 will be described later.
- the CPU 1121 executes a management-terminal-submitted request processing program 722 (the management-terminal-submitted request processing program 122 used for the copy management switch 703 ), and in response to an addition or deletion of the cache index 737 resulting from the change of contents of the copy management table 731 , the cache table 741 is added or deleted.
- the CPU 1141 executes the address translation program 726 (the address translation program 126 used for the copy management switch 703 ).
- the CPU 1141 judges address information included in a read request, and then instructs the controller 112 to execute a cache processing program 727 .
- the CPU 1141 translates the address information about read data and the like. Details in the series of processing will be described later.
- the CPU 1121 executes the cache processing program 727 , and then, by use of the cache table 741 , makes a judgment as to whether or not there exists copy data corresponding to the read request for the original data. If the copy data exists, the CPU 1121 issues a read request to the storage device 104 storing the copy data (in this case, the storage device 104 b ). On the other hand, if the copy data does not exist, the CPU 1121 copies the original data, which is specified by the read request, from the storage device 104 a to the storage device 104 b , and then transmits the copy data to the host 105 . Details in the series of processing will be described later.
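- The judgment-and-copy flow described above can be sketched as follows. This is only an illustrative outline: the function name process_read, the dictionary fields, and the use of byte buffers in place of real storage devices and frames are assumptions made for the example, not terms from this specification.

```python
# Illustrative sketch of the copy-on-read judgment made by the cache
# processing program 727; names and data layout are assumptions.
def process_read(cache_table, storage_a, storage_b, lba, length):
    # If a valid cache entry covers the requested area, the copy
    # already exists and the read can be served from the copy device.
    for e in cache_table:
        if (e["valid"] and e["orig_lba"] <= lba
                and lba + length <= e["orig_lba"] + e["orig_len"]):
            return "hit"
    # Otherwise copy the requested area from the original device to
    # the copy device first, and record the new association.
    storage_b[lba:lba + length] = storage_a[lba:lba + length]
    cache_table.append({"valid": True, "orig_lba": lba,
                        "orig_len": length, "copy_lba": lba})
    return "miss-copied"

original = bytearray(b"ORIGINALDATA....")
copy_dev = bytearray(16)
table = []
first = process_read(table, original, copy_dev, 0, 8)   # copies the data
second = process_read(table, original, copy_dev, 0, 8)  # served from copy
```

A second read of the same area finds the cache entry created by the first, so no further transfer from the original device is needed.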
- FIG. 16 is a diagram illustrating the configuration of the copy management table 731 .
- the copy management table 731 has a plurality of copy management entries 732 .
- each copy management entry 732 uses a SAN address and a LUN to manage the association between a stored location of original data and a location to which data is copied.
- each copy management entry 732 comprises an entry 733 for storing an original SAN address, an entry 734 for storing an original LUN, an entry 735 for storing a copy SAN address, an entry 736 for storing a copy LUN, and an entry 737 for storing cache index information.
- the copy management entry 732 specifies: the storage device 104 a that stores original data; a LUN of the original data; the storage device 104 b used to store copy data; and a LUN prepared for the copy data.
- the cache index is information used to specify the cache table 741 described below.
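- As a rough illustration, the copy management table 731 can be thought of as a list of entries searched by original SAN address and LUN, as in the following sketch; the key names are illustrative stand-ins for the entries 733 - 737 , not terms used by this specification.

```python
# Hypothetical in-memory form of the copy management table 731; the
# dictionary keys loosely mirror entries 733-737 and are illustrative.
copy_management_table = [
    {"orig_san": "sw1.node3", "orig_lun": 0,
     "copy_san": "sw2.node7", "copy_lun": 1, "cache_index": 0},
]

def find_entry(table, dest_san, dest_lun):
    """Select the entry whose original SAN address and LUN match the
    destination of a read request (the search performed in step 753)."""
    for e in table:
        if e["orig_san"] == dest_san and e["orig_lun"] == dest_lun:
            return e
    return None

hit = find_entry(copy_management_table, "sw1.node3", 0)
miss = find_entry(copy_management_table, "sw9.node1", 0)
```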
- FIG. 17 is a diagram illustrating an example of how the cache table 741 is configured.
- the number of the cache tables 741 is equivalent to the number of the copy management entries 732 used in the copy management table 731 .
- the cache table 741 is associated with the copy management table 731 on the basis of a cache index stored in the entry 737 .
- Each cache table 741 has a plurality of cache entries 742 .
- the number of the cache entries 742 in each cache table 741 is determined by the capacity of the memory 1122 , and the like, when the system is designed.
- the cache entry 742 comprises an entry 743 for storing an original LBA, an entry 744 for storing a copy LBA, an entry 745 for storing an original length, an entry 746 for storing a non-access counter, and an entry 747 for storing a validity flag.
- the non-access counter is a counter used in the cache processing program 727 . How the non-access counter is used will be described later.
- the validity flag indicates whether the cache entry 742 to which it belongs is valid or invalid. For instance, if the validity flag is 1, the cache entry 742 is valid; and if the validity flag is 0, the cache entry 742 is invalid.
- the valid cache entry 742 (that is to say, 1 is stored in the entry 747 ) indicates that a storage area of the storage device 104 a , starting from the original LBA and having a length of the original length, is copied to a storage area of the storage device 104 b , starting from the copy LBA.
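- The relation held by a valid cache entry 742 can be sketched as a small record with a containment test; the field and method names below are illustrative assumptions, not terms from this specification.

```python
# Sketch of a cache entry 742; field names are illustrative.
from dataclasses import dataclass

@dataclass
class CacheEntry:
    orig_lba: int        # entry 743: start of the original area
    copy_lba: int        # entry 744: start of the copied area
    orig_len: int        # entry 745: length of the copied area
    non_access: int = 0  # entry 746: non-access counter
    valid: bool = True   # entry 747: validity flag (1 = valid)

    def covers(self, lba, length):
        """True if the requested area lies inside the copied area."""
        return (self.valid and self.orig_lba <= lba
                and lba + length <= self.orig_lba + self.orig_len)

e = CacheEntry(orig_lba=1000, copy_lba=0, orig_len=256)
```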
- FIG. 18 is a flowchart illustrating a series of processing executed in the address translation unit 114 according to this embodiment. Incidentally, the series of processing is performed when the CPU 1141 executes the address translation program 726 .
- the CPU 1141 starts execution of the address translation program 726 (step 751 ).
- the CPU 1141 judges whether or not the received frame is intended for a read request (step 752 ). If the received frame is intended for a read request, the CPU 1141 judges whether or not data requested by the read request is data stored in a LUN of the storage device 104 as a target to be copied (more specifically, original data). To be more specific, the CPU 1141 searches for the copy management entry 732 in which a destination SAN address, and a LUN, of the read request match values stored in the entries 733 , 734 (step 753 ).
- if the matching copy management entry 732 is found in step 753 , the CPU 1141 instructs the controller 112 to execute the cache processing program 727 .
- at this time, the CPU 1141 transmits to the controller 112 the received frame, and the information stored in the copy management entry 732 that has been selected in step 753 (step 754 ). The instant the processing of step 754 ends, the CPU 1141 ends the series of processing.
- if the received frame is not intended for a read request, the CPU 1141 judges whether or not the frame is intended for read data or a response (step 755 ). If the frame is intended for read data or a response, the CPU 1141 judges whether or not the received frame is transmitted from the storage device 104 (in this case, 104 b ) that stores copy data. To be more specific, the CPU 1141 judges whether or not the copy management table 731 has the copy management entry 732 in which the value stored in the entry 735 agrees with the source of the frame (step 756 ).
- the CPU 1141 uses the copy management entry 732 , which has been found in step 756 , to change a source of the frame to an original SAN address stored in the entry 733 (step 757 ).
- if a destination of the frame containing the read request does not exist in the copy management table 731 in step 753 , if it is judged in step 755 that the frame is neither read data nor a response, or if a source of the frame does not exist in the copy management table 731 in step 756 , the CPU 1141 transmits the frame, for which the processing is thus completed, to the switch processing unit 115 before ending the series of processing (step 758 ).
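- The dispatch performed in steps 752 - 758 can be summarized in the following sketch; the frame representation and the return values are assumptions made for illustration.

```python
# Minimal sketch of the FIG. 18 dispatch in the address translation
# program 726; frame fields and return values are assumptions.
def translate(frame, copy_table):
    if frame["type"] == "read_request":
        for e in copy_table:
            if (e["orig_san"], e["orig_lun"]) == (frame["dest"], frame["lun"]):
                return "to_cache_program"      # steps 753-754
    elif frame["type"] in ("read_data", "response"):
        for e in copy_table:
            if e["copy_san"] == frame["src"]:
                frame["src"] = e["orig_san"]   # step 757: relabel source
                return "to_switch"
    return "to_switch"                         # step 758: pass through

table = [{"orig_san": "A", "orig_lun": 0, "copy_san": "B"}]
req = {"type": "read_request", "dest": "A", "lun": 0}
rsp = {"type": "read_data", "src": "B"}
req_result = translate(req, table)   # request for original data
rsp_result = translate(rsp, table)   # read data arriving from the copy
```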
- FIG. 19 is a diagram illustrating a series of processing executed by the controller 112 according to an instruction from the address translation unit 114 in step 754 .
- the series of processing proceeds when the controller 112 executes the cache processing program 727 .
- upon receipt of the instruction from the address translation unit 114 , the controller 112 judges whether or not the storage area of the storage device 104 a that stores the original data specified by the LBA and the length of the read request, which is the contents of the frame received from the address translation unit 114 , has already been copied to the storage device 104 b in which copy data is stored.
- to be more specific, the CPU 1121 judges whether or not there exists a valid cache entry 742 (whose validity flag is 1 ) such that the area specified by the LBA and the length of the read request is included in the area specified by the entries 743 , 745 (step 761 ).
- if such a cache entry 742 is found, the CPU 1121 updates the non-access counter 746 of each valid cache entry 742 included in the cache table 741 used in step 761 . More specifically, the CPU 1121 sets the value of the entry 746 in the cache entry 742 satisfying the condition of step 761 at 0, and then increments by one the value stored in the entry 746 in each of all the other valid cache entries 742 (step 762 ).
- after that, the CPU 1121 generates a read request for the copy data. To be more specific, the CPU 1121 changes the destination SAN address and the LUN of the frame to the values stored in the entries 735 , 736 of the copy management entry 732 transmitted from the address translation unit 114 . Moreover, the CPU 1121 uses the information stored in the cache entry 742 found in step 761 to change the LBA to a value determined by (the LBA specified by the read request + the value stored in the entry 744 − the value stored in the entry 743 ), and also changes the length to the value stored in the entry 745 (step 763 ). The CPU 1121 then transmits the processed frame to the switch processing unit 115 (step 764 ).
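- The LBA rewrite of step 763 amounts to adding the constant offset between the copy area and the original area, as the following sketch shows; the variable names are illustrative.

```python
# The LBA rewrite of step 763, written out as an expression; variable
# names are illustrative, not from the specification.
def translate_lba(request_lba, orig_lba, copy_lba):
    # entry 744 (copy LBA) minus entry 743 (original LBA) is the
    # constant offset between the original area and its copy.
    return request_lba + copy_lba - orig_lba

# Original area starts at LBA 1000, its copy at LBA 200:
print(translate_lba(1010, 1000, 200))  # block 10 into the area -> 210
```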
- if such a cache entry 742 is not found, the CPU 1121 judges whether or not a vacant area large enough to store data having the length specified by the read request exists in the storage area of the storage device 104 b for storing copy data specified by the copy management entry 732 . Further, the CPU 1121 judges whether or not the cache table 741 includes a cache entry 742 that is not used. More specifically, by use of the information about all the valid cache entries 742 of the cache table 741 , the CPU 1121 checks the storage areas currently used in the storage device 104 b , and thereby finds in the storage device 104 b a free storage area having a length greater than or equal to the length value included in the read request (step 765 ).
- if no such free storage area or unused cache entry 742 is found, the CPU 1121 deletes or invalidates one of the valid cache entries 742 in the cache table 741 to extend the free storage area.
- the CPU 1121 finds the cache entry 742 , the entry 746 of which has the largest value among those stored in the entries 746 of the valid cache entries 742 , and then sets the value of the entry 747 of the found cache entry 742 at 0 (step 766 ). After that, the CPU 1121 repeats the processing of step 765 .
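- The counter update of step 762 and the invalidation of step 766 can be sketched as follows; the list-of-dictionaries layout is an assumption made for illustration.

```python
# Sketch of the non-access counters (step 762) and the eviction of
# step 766; the data layout is an illustrative assumption.
def touch(entries, hit_index):
    """Reset the counter of the entry just used; age all other valid ones."""
    for i, e in enumerate(entries):
        if e["valid"]:
            e["non_access"] = 0 if i == hit_index else e["non_access"] + 1

def evict(entries):
    """Invalidate the valid entry with the largest non-access counter."""
    victim = max((e for e in entries if e["valid"]),
                 key=lambda e: e["non_access"])
    victim["valid"] = False
    return victim

table = [{"valid": True, "non_access": 0},
         {"valid": True, "non_access": 0}]
touch(table, 0)          # entry 0 used: counter 0; entry 1 ages to 1
touch(table, 0)          # entry 1 ages to 2
evicted = evict(table)   # entry 1 has gone longest without access
```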
- if an appropriate storage area and at least one unused (invalid) cache entry 742 are found in step 765 , the CPU 1121 updates the found cache entry 742 so as to store the association of the original data specified by the read request with the found storage area of the storage device 104 b . To do so, the original data is copied first: the CPU 1121 reads out the original data, which is specified by the read request, from the storage device 104 a that holds the original data.
- the CPU 1121 creates a read request for the original data, in which a source is a SAN address of the controller 112 , and then transmits the read request to the switch processing unit 115 . Subsequently, the CPU 1121 stores in the memory 1122 the read data that has been transmitted from the storage device holding the original data (step 767 ).
- the CPU 1121 transmits the original data stored in the memory 1122 to the storage area of the storage device 104 b which has been found in step 765 (step 768 ).
- the CPU 1121 then updates the cache table 741 .
- the CPU 1121 stores in the entry 744 a copy LBA corresponding to the storage area that stores data transmitted from the memory 1122 , and also stores in the respective entries 743 , 745 an original LBA and an original length corresponding to the storage area of the original data.
- the CPU 1121 sets a value of the entry 746 in the cache entry 742 at 0, and sets a value of the entry 747 at 1 (step 769 ). After that, the CPU 1121 executes processing of step 763 and beyond.
- the user or administrator of the system uses the management terminal 106 connected to the copy management switch 703 to make settings in the copy management table 731 so that the original data stored in a storage area specified by a LUN in the storage device 104 a is copied to a storage area specified by a LUN in the storage device 104 b .
- at this point, data has not yet been copied to the storage device 104 b .
- all the cache entries 742 of the cache table 741 corresponding to the cache index 737 of the copy management entry 732 are invalid, which represents an initial state.
- a frame containing the read request arrives at the copy management switch 703 .
- on detecting the receipt of the frame, the address translation unit 114 starts the address translation program 726 .
- the address translation unit 114 judges that the read request is a request for the original data, and therefore instructs the controller 112 to execute the cache processing program 727 .
- the instructed controller 112 searches for the cache entry 742 corresponding to the data specified by the read request. At this point of time, because the data has not yet been transmitted to the storage device 104 b , the corresponding cache entry 742 is not found. For this reason, the controller 112 issues a read request to the storage device 104 a , and then transfers the read data to the storage device 104 b . In addition, the controller 112 stores in the cache table 741 the cache entry 742 corresponding to the data transferred to the storage device 104 b.
- the controller 112 generates, from the received frame containing the read request, a frame containing a read request for the copy data whose source is the host 105 . Then, the controller 112 transmits the generated frame to the switch processing unit 115 .
- the switch processing unit 115 transmits the frame to the storage device 104 b .
- the storage device 104 b , which has received the frame, then transmits a frame of read data whose destination is the host 105 via the copy management switch 703 .
- the address translation unit 114 which has received the frame of read data, judges the read data of the received frame to be copy data.
- the address translation unit 114 changes a source of the read data to the storage device 104 a , and then transfers the read data to the switch processing unit 115 .
- the switch processing unit 115 transfers the read data to the host 105 .
- the read data arrives at the host 105 as data transmitted from the storage device 104 a.
- when the host 105 thereafter issues a read request for the same original data, the controller 112 receives the frame containing the read request, and this time can find the cache entry 742 corresponding to the original data. In this case, the controller 112 generates a read request for the copy data whose source is the host 105 according to the frame of the read request, and then transmits the frame to the switch processing unit 115 .
- the second read request is handled by use of only the copy data stored in the storage device 104 b .
- only the original data that is actually accessed by the host is copied.
- the non-access counter 746 is used to delete the cache entry 742 of copy data that has not been accessed for the longest time. In other words, only data whose frequency of accesses from the host is high is held in the storage device 104 b .
- this embodiment enables efficient use of the storage capacity of the storage device 104 b.
- the method used to delete the cache entry 742 is a method that is in general called LRU (Least Recently Used). However, another method may also be used.
- the copy management device reads out from the storage device 104 a original data specified by a read request issued from the host 105 , and then stores the read data in the memory 1122 before transferring the read data to the storage device 104 b for storing copy data.
- not only the original data specified by the read request issued from the host 105 but also data before and behind the original data may be read out from the storage device 104 a.
- data may be read out from a storage area that starts from the same starting location as that of the specified storage area and is longer than the specified length. Reading out data in this manner increases a possibility that if the host 105 issues read requests for consecutive areas, data requested by the next read request will be found in the storage device 104 b . Further, the whole storage area specified by a SAN address and a LUN may also be read.
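- The read-ahead variation described above can be sketched as a simple lengthening of the requested read, clamped to the end of the device; the function and parameter names are illustrative assumptions.

```python
# Sketch of the read-ahead variation: keep the starting LBA of the
# request but lengthen the read; parameter names are illustrative.
def extend_request(lba, length, prefetch_blocks, device_size):
    """Return the (lba, length) actually read from the original device."""
    return lba, min(length + prefetch_blocks, device_size - lba)

print(extend_request(100, 8, 32, 1000))   # (100, 40)
print(extend_request(990, 8, 32, 1000))   # clamped at the device end
```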
- the data read request issued to the storage device 104 a may also be replaced with a request to copy data from the storage device 104 a to the storage device 104 b .
- the controller 112 may also be provided with a dedicated buffer memory used to transfer original data to the storage device 104 b for storing copy data.
- read data may also be transmitted to the host 105 concurrently with transferring the read data to the storage device 104 b that holds the copy data.
- because the amount of data flowing through the SAN can be reduced, the load on the SAN can also be reduced.
Abstract
A copy management switch is placed in a network that connects a storage device with a computer. This copy management switch is connected to the computer, a first storage device, and a second storage device over the network. The copy management switch copies beforehand data stored in the first storage device to the second storage device. On receiving a read request, which is issued from the computer to the first storage device, the copy management switch converts the read request to the first storage device into a read request to the second storage device, and then transmits the converted read request to the second storage device. The second storage device transfers, to the copy management switch, data corresponding to the read request. The copy management switch transfers the data to the computer as data transferred from the first storage device.
Description
- The present invention relates to switches placed in a network that connects a storage device with a computer.
- With the increasing storage capacity of storage devices used in companies and the like, a system in which a connection between storage devices, or between storage devices and computers, is made via a network such as a fiber channel is becoming popular. The network providing a connection between the storage devices or between others, or the total system in which a connection is made via the network, is called a storage area network (hereinafter referred to as SAN).
- On the other hand, examples of technologies that reduce the frequency of data transfer in a network so as to shorten time required for causing a computer to access data stored in a storage device include a network cache technology. To be more specific, a storage area (hereinafter referred to as a “cache device”) for temporarily storing data on a network is first prepared, and subsequently, if through the cache device the computer reads out data stored in the storage device, the read data is stored in the cache device, which then returns a response when the data is accessed thereafter. As a result, the access time for the data is shortened.
- An example in which the network cache technology is employed in SAN is disclosed in Japanese Patent Laid-open No. 2002-132455 (patent document 1). To be more specific, after a computer called a metadata server is provided in SAN, a computer first accesses the metadata server when accessing a storage device. The metadata server notifies the computer of a location of data to be accessed. If a cache is used, the computer is notified of a location of a device having the cache.
- In addition, examples of technologies for placing a device having a cache on a network such as the Internet or WWW include a technology called transparent cache. In this technology, when a switch receives an access request for data, which has been issued from a computer to a storage device, the switch transmits the access request to a computer having a cache (hereinafter referred to as a “cache server”) as a first step. If a target file of the access request exists in the cache possessed by the cache server (hereinafter referred to as a “cache hit”), the cache server transmits the target file to the computer that has issued the access request. On the other hand, if the target data does not exist in the cache (hereinafter referred to as a “cache miss”), the cache server transmits an access request to the storage device to obtain the data, and then transmits the obtained data to the computer that has issued the access request.
- As described above, if the network cache technology is employed in SAN, when a cache hit is encountered, the time taken for obtaining the requested data is shortened.
- However, as far as the technology disclosed in patent document 1 is concerned, although it is possible to install a cache device in SAN, a metadata server is required to access data, and the settings and operation of the computer need to be changed. More specifically, a protocol used for accessing data (for instance, the SCSI protocol), which is conventionally used in SAN, needs to be changed to a dedicated protocol that uses the metadata server.
- In the meantime, as is the case with the transparent cache, if a network cache technology for handling data on a file basis is used, it is difficult to speed up a response because processing for judging a cache hit is complicated.
- To be more specific, if cache processing is performed on a file basis, a read request is equivalent to an access request specifying a file name, etc. On receiving the access request, the cache server first reads out a file held in its own storage device, together with data called metadata which stores the association of the file with a corresponding block in the storage device, and then searches the read data for the file specified by the read request to judge whether or not a cache hit is encountered. Because this search processing judges the coincidence of a name or the like, it becomes more complicated than the comparison between the numerical values of logical block addresses that is used to specify locations to be accessed in a block access protocol such as SCSI.
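- The contrast drawn here can be illustrated with a sketch: with block addresses, a hit judgment reduces to numeric comparisons over LBA ranges (here with a binary search over a sorted list), with no name matching or metadata parsing involved. The data layout is an assumption made for the example.

```python
# Illustrative block-basis hit judgment: purely numeric comparison
# over sorted LBA ranges; the representation is an assumption.
import bisect

def block_hit(ranges, lba):
    """ranges: sorted list of (start_lba, length) of cached areas."""
    i = bisect.bisect_right(ranges, (lba, float("inf"))) - 1
    if i >= 0:
        start, length = ranges[i]
        return start <= lba < start + length
    return False

cached = [(0, 16), (100, 50), (400, 8)]
print(block_hit(cached, 120))   # inside the area starting at LBA 100
print(block_hit(cached, 300))   # between cached areas: a miss
```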
- An object of the present invention is to speed up an access to data without changing settings of a computer in SAN so that a network bandwidth can be saved.
- In order to achieve the above-mentioned object, a network system according to the present invention has the undermentioned configuration. To be more specific, according to one aspect of the present invention, there is provided a network system comprising: a computer; a switch that is connected to the computer; a first storage device that is connected to the switch via a network; and a second storage device that is connected to the switch via the network.
- In this network system, the switch transfers data stored in the first storage device to the second storage device according to an instruction from outside. Then, on receiving from the computer an access request for the data stored in the first storage device, the switch converts the access request into an access request to the second storage device, and then transmits the converted access request to the second storage device. Next, after receiving data from the second storage device, the switch converts the received data into such data that can be recognized as data transmitted from the first storage device, and then transmits the converted data to the computer.
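- The two conversions performed by the switch are symmetric, as the following sketch illustrates; the frame representation and device names are assumptions made for the example.

```python
# Sketch of the two conversions: requests addressed to the first
# storage device are redirected to the second, and returned data is
# relabeled as if it came from the first. Names are illustrative.
def convert_request(frame, first_dev, second_dev):
    if frame["dest"] == first_dev:
        frame = dict(frame, dest=second_dev)
    return frame

def convert_data(frame, first_dev, second_dev):
    if frame["src"] == second_dev:
        frame = dict(frame, src=first_dev)
    return frame

req = convert_request({"dest": "dev1", "op": "read"}, "dev1", "dev2")
rsp = convert_data({"src": "dev2", "data": b"x"}, "dev1", "dev2")
```

The computer therefore keeps addressing the first storage device throughout, which is why its settings need not change.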
- It is to be noted that a second computer connected to the switch may also give an instruction to the switch. Additionally, the switch may also provide the computer with a virtual storage corresponding to the first storage device. In this case, the computer issues an access request to the virtual storage.
- Moreover, according to another aspect of the present invention, the above-mentioned switch and the second storage device may also be integrated into one device.
- According to still another aspect of the present invention, instead of transferring beforehand data stored in the first storage device to the second storage device, the switch may also transfer the data stored in the first storage device to the second storage device in response to an access request from the computer. Further, in this case, the switch may have information about whether or not the data stored in the first storage device has been transferred to the second storage device, and transmit an access request to the first storage device or the second storage device according to the information. Furthermore, in this aspect, when the switch transfers data from the first storage device to the second storage device, the switch checks the amount of free storage capacity of the second storage device. If the amount of free storage capacity provided by the second storage device is not enough to store the data to be transferred, the switch deletes some amount of data stored in the second storage device according to a predetermined criterion, e.g., according to the frequency of use by the computer, and then transfers the data to the area freed by the deletion.
- It is to be noted that instead of the switch, the first storage device or the second storage device may also control the transmission of data.
-
FIG. 1 is a diagram illustrating a configuration of a computer system according to a first embodiment of the present invention; -
FIG. 2 is a diagram illustrating a configuration of a copy management switch according to the first embodiment; -
FIG. 3 is a diagram illustrating a memory configuration of a copy management switch according to the first embodiment; -
FIG. 4 is a diagram illustrating a configuration of a copy management table according to the first embodiment; -
FIG. 5 is a flowchart illustrating address translation processing according to the first embodiment; -
FIG. 6 is a diagram illustrating a configuration of a computer system according to a second embodiment of the present invention; -
FIG. 7 is a diagram illustrating a configuration of a proxy address table according to the second embodiment; -
FIG. 8 is a flowchart illustrating address translation processing according to the second embodiment; -
FIG. 9 is a diagram illustrating a configuration of a computer system according to a third embodiment of the present invention; -
FIG. 10 is a diagram illustrating a configuration of a virtual address table according to the third embodiment; -
FIG. 11 is a diagram illustrating an example of a copy management table according to the third embodiment; -
FIG. 12 is a diagram illustrating a configuration example in which a plurality of copy management switches are provided in the third embodiment; -
FIG. 13 is a diagram illustrating a configuration of a computer system according to a fourth embodiment of the present invention; -
FIG. 14 is a diagram illustrating a configuration of a computer system according to a fifth embodiment of the present invention; -
FIG. 15 is a diagram illustrating a configuration example of a copy management switch; -
FIG. 16 is a diagram illustrating a configuration of a copy management table according to a sixth embodiment; -
FIG. 17 is a diagram illustrating a configuration of a cache table according to the sixth embodiment; -
FIG. 18 is a flowchart illustrating address translation processing according to the sixth embodiment; and -
FIG. 19 is a flowchart illustrating cache processing according to the sixth embodiment. -
FIG. 1 is a diagram illustrating a first embodiment of a computer system to which the present invention is applied. The computer system comprises a SAN 101, ahost 105, astorage device 104 a, and astorage device 104 b. Thehost 105, thestorage device 104 a, and thestorage device 104 b are interconnected over the SAN 101. The SAN 101 comprises thehost 105,switches copy management switch 103 described later. - This embodiment, which will be described below, is based on the assumption that the
host 105 makes a read request for data (hereinafter referred to as “original data”) stored in thestorage device 104 a. - The
host 105 is a computer comprising aCPU 1051, amemory 1052, and aninterface 1053 used to make a connection to the SAN 101. - The storage device 104 comprises the following: a medium 1043 for storing data; an
interface 1044 used to make a connection to theSAN 101; aCPU 1041 for executing a program used to respond to a request from thehost 105; and amemory 1042. Incidentally, a variety of examples could be conceivable as the medium 1043 included in the storage device 104. For example, a disk array constituted of a plurality of hard disk drives may also be adopted as the medium 1043. - In addition, on receiving a data read request from the
host 105, the storage device 104 transmits to thehost 105 data corresponding to the request, and then transmits a response notifying that the transmission is completed. - The
switches copy management switch 103 mutually exchange connection information to create a routing table required for routing processing described below. To be more specific, they exchange information indicating a load (overhead) of communication between arbitrary two switches (hereinafter referred to as a “connection cost”). In general, the connection cost becomes larger with decrease in communication bandwidth of a communication line between the arbitrary two switches. However, an administrator or the like can also set a connection cost at a given value through amanagement terminal 106 described below. Each of the switches calculates, from all connection costs obtained, the sum of the connection costs for a path leading to each of the other switches, and thereby finds a path for which the sum of the connection costs is the lowest. The path is then stored in the routing table. - A SAN domain address unique to each switch is assigned to each of the
switches copy management switch 103. Additionally, a unique SAN address is assigned to each of thestorage devices host 105. Here, the SAN address is an address constituted of: a SAN domain address of a switch in theSAN 101, which is connected to a device (hereinafter referred to also as a “node”) such as a computer connected to theSAN 101; and a SAN node address unique to a group (hereinafter referred to as a “domain”) specified by the SAN domain address. - When transmitting/receiving a frame to/from another node, each node specifies a source node and a destination node by adding a source SAN address and a destination SAN address to the frame. Each of the
switches copy management switch 103 searches the routing table for a destination SAN domain address of a frame to route the frame. In addition, if the destination SAN domain address of the frame agrees with a SAN domain address of the switch 102, the frame is transferred to a node directly connecting to the switch 102, which has a SAN node address that agrees with a destination SAN node address of the frame. It should be noted that a frame is a unit of data or an access request transmitted through a protocol used for theSAN 101. - This embodiment is based on the assumption that because of a low connection cost, a short distance, performance of the
storage device 104 b higher than that of thestorage device 104 a, or the like, thehost 105 can access thestorage device 104 b at higher speed as compared with a case where the host accesses thestorage device 104 a. - The
management terminal 106 is connected to thecopy management switch 103. The user or administrator of the computer system according to this embodiment instructs thecopy management switch 103 to copy original data to thestorage device 104 b by use of themanagement terminal 106. At this time, the administrator enters, through themanagement terminal 106, information indicating the association of a location of the original data with a location of the copied data (hereinafter referred to as “copy data”) in thecopy management switch 103, more specifically, in a copy management table 131 described below. In this connection, the administrator or the like may also instruct, through themanagement terminal 106, thecopy management switch 103 to collect information about the frequency of accesses from thehost 105 to thestorage device 104 a, and the like, and then to copy to thestorage device 104 b only areas, for which the access frequency is high, instead of the whole original data. - Further, the
switches 102 and the copy management switch 103 may also be adapted to perform name management. For example, a fiber channel switch has a table used to search for the unique SAN address of a port in the SAN 101 by use of a world wide name (hereinafter referred to as a WWN), which is a globally unique and unchangeable value assigned to a node or a port. WWNs include the WWPN (World Wide Port Name), which is unique to a port connected to the SAN 101, and the WWNN (World Wide Node Name), which is unique to a node. A node having a plurality of connection ports is allowed to have one WWNN and a plurality of WWPNs. In addition, the standard called iSCSI, which makes the SCSI protocol usable on an IP network, also has a name management system, called iSNS. -
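As a rough sketch (not the patent's implementation), the domain-based routing described above — forward a frame toward the switch owning its destination domain, or deliver it locally when the domain matches — might look like the following; the `(domain, node)` address encoding, the table contents, and the function name are illustrative assumptions:

```python
# Illustrative sketch of domain-based SAN routing. A SAN address is
# modeled as a (domain, node) pair; the routing table maps a destination
# SAN domain address to a next-hop port. All concrete values are made up.

def route_frame(frame, my_domain, routing_table, local_nodes):
    """Return ('local', node) when the frame is for a node attached to
    this switch, otherwise ('forward', port) looked up by the
    destination SAN domain address."""
    domain, node = frame["dst"]
    if domain == my_domain:
        # Destination domain matches this switch: deliver directly to the
        # attached node whose SAN node address matches.
        if node in local_nodes:
            return ("local", node)
        raise LookupError("no such node in this domain")
    # Otherwise route the frame by its destination SAN domain address.
    return ("forward", routing_table[domain])
```

For instance, a switch owning domain 5 would deliver a frame addressed to (5, 2) to its attached node 2, and forward a frame addressed to (4, 1) out the port its routing table lists for domain 4.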
FIG. 2 is a diagram illustrating a configuration of the copy management switch 103. The copy management switch 103 comprises the following: a port 111 used to connect to another node; a controller 112; a management port 113 used to connect to the management terminal 106; an address translation unit 114; and a switch processing unit 115 that performs routing and switching. The switch processing unit 115 holds the routing table required for routing. - The controller 112 comprises a CPU 1121, a memory 1122, and a non-volatile storage 1123. The address translation unit 114 comprises a CPU 1141 and a memory 1142. It is to be noted that because the configuration disclosed in this figure is merely a preferred embodiment, another configuration may also be applied so long as it can achieve equivalent functions. -
FIG. 3 is a diagram illustrating programs and data that are stored in the memories 1122 and 1142 and the non-volatile storage 1123 of the copy management switch 103. - The initialization program 121 is a program that is executed by the CPU 1121 upon start-up of the copy management switch 103. By executing the initialization program 121, the CPU 1121 reads each of the other programs from the non-volatile storage 1123 into the memory 1122 and the memory 1142, and also reads the copy management table 131 described below into the memory 1142 possessed by each address translation unit 114. - A management-terminal-submitted
request processing program 122, a routing protocol processing program 123, and a name service processing program 124 are stored in the memory 1122 of the controller 112. The CPU 1121 executes these programs. An address translation program 126 is stored in the memory 1142 of the address translation unit 114, and is executed by the CPU 1141. - By executing the management-terminal-submitted request processing program 122, the CPU 1121 changes the contents of the copy management table 131 according to a request submitted from the management terminal 106, which is received through the management port 113. Additionally, by executing the management-terminal-submitted request processing program 122, the CPU 1121 copies data according to the request from the management terminal 106. - On the other hand, by executing the management-terminal-submitted request processing program 122, the CPU 1121 can perform the management applied to general switches. Incidentally, examples of protocols used for the management port 113 include TCP/IP. However, another protocol may also be used so long as it is possible to communicate with the management terminal 106 by that protocol. - Further, by executing the routing protocol processing program 123, the CPU 1121 exchanges information about connections in the SAN 101 (hereinafter referred to as "connection information") with the other switches 102 to create a routing table, and then stores the created routing table in the memory possessed by the switch processing unit 115. - By executing the name service processing program 124, the CPU 1121 writes information about the nodes connected to the copy management switch 103 to the name database 125, and responds to search requests from the nodes. For the purpose of receiving from a node a request for searching the name database 125, a SAN address is allocated to the controller 112. - During the execution of the address translation program 126 by the CPU 1141 of the address translation unit 114, as soon as the port 111 receives a frame, the CPU 1141 translates the destination and source SAN addresses of a read request, read data, or the like, according to the information stored in the copy management table 131. Details of the address translation processing will be described later. It is to be noted that although the address translation processing is executed on the basis of a program in this embodiment, dedicated hardware may also perform the address translation processing. -
FIG. 4 is a diagram illustrating how the copy management table 131 is configured. The copy management table 131 has a plurality of copy management entries 132. Each of the copy management entries 132 holds information about the association of original data with copy data. A copy management entry 132 comprises the following: a field 133 for storing an original SAN address indicating the storage device 104 that stores the original data; a field 134 for storing a number indicating the logical unit in the storage device 104 that stores the original data (hereinafter referred to as an "original LUN"); a field 135 for storing a logical block address indicating the starting location of the original data in the logical unit (hereinafter referred to as an "original LBA"); a field 136 for storing an original length indicating the size of the original data; a field 137 for storing a copy SAN address indicating the storage device 104 that stores the copy data corresponding to the original data of the copy management entry 132; a field 138 for storing a number indicating the logical unit in the storage device 104 that stores the copy data (hereinafter referred to as a "copy LUN"); and a field 139 for storing a logical block address indicating the starting location of the copy data in the logical unit (hereinafter referred to as a "copy LBA"). Incidentally, because the data length of the copy data is the same as that of the original data, a field for storing a copy length is not necessary. - Next, operation of each device according to the present invention will be outlined below.
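As a non-normative sketch, the copy management entry of FIG. 4 and the area-containment test used later in the flow of FIG. 5 might be modeled as follows; the dataclass and its method are assumptions of this sketch, with only the field numbering taken from the description above:

```python
# Illustrative model of one copy management entry 132 (FIG. 4) and the
# check that a requested area lies entirely inside the original area for
# which copy data exists. Field comments mirror the description.
from dataclasses import dataclass

@dataclass
class CopyManagementEntry:
    orig_san: str   # field 133: original SAN address
    orig_lun: int   # field 134: original LUN
    orig_lba: int   # field 135: starting LBA of the original data
    length: int     # field 136: size of the original data
    copy_san: str   # field 137: copy SAN address
    copy_lun: int   # field 138: copy LUN
    copy_lba: int   # field 139: starting LBA of the copy data
    # No copy-length field: the copy has the same length as the original.

    def covers(self, san, lun, lba, length):
        """True if the requested area is inside the original data area."""
        return (san == self.orig_san and lun == self.orig_lun
                and self.orig_lba <= lba
                and lba + length <= self.orig_lba + self.length)
```

With the example values given later (original at LUN 0 from LBA 0 with length 100000), a read of 8 blocks at LBA 30000 is covered, while one ending past LBA 100000 is not.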
- In this embodiment, according to an instruction from an administrator or the like, the
copy management switch 103 first transfers original data stored in the storage device 104 a to the storage device 104 b to create copy data therein. In this case, information indicating the association of the storage device 104 a storing the original data with the storage device 104 b storing the copy data is written to the copy management table 131. - After completing the above-mentioned copy processing, on receiving a read request from the host 105, the copy management switch 103 judges whether or not the address information held in the frame containing the read request is included in the information indicating the association of the original data with the copy data, and thereby determines whether or not the data targeted by the read request is original data and whether or not copy data corresponding to that original data exists. - If there is copy data corresponding to the original data targeted by the read request, the copy management switch 103 converts the read request for the original data stored in the storage device 104 a, which has been received from the host 105, into a read request to the storage device 104 b that stores the copy data. To be more specific, the copy management switch 103 changes the SAN address indicating the request destination (the storage device 104 a), which is included in the read request, to the SAN address of the storage device 104 b that stores the copy data. This enables effective use of the network. - How to create the copy data will be described below. The user or administrator of the system uses the
management terminal 106 to transmit the following information to the copy management switch 103: the SAN address of the storage device 104 a; a logical unit number (hereinafter referred to as a "LUN") that is the address of the original data in the storage device 104 a; a logical block address (hereinafter referred to as an "LBA"); the length of the original data; the SAN address of the storage device 104 b; and the data copy destination's LUN and LBA. - The controller 112 which has received the information transmits a read request for the original data to the storage device 104 a. - Next, the controller 112 stores in the memory 1122 the read data that has been transmitted from the storage device 104 a. Subsequently, the controller 112 transmits a write request to the storage device 104 b, and thereby writes the original data stored in the memory 1122 to the storage device 104 b. The data is copied through the above-mentioned processing. - Incidentally, the copy operation described above is merely an example of the copy processing. Therefore, how to copy the data is not limited to this method. For example, the
controller 112 may also be provided with a dedicated buffer for storing the read data instead of the memory 1122. In addition, the storage device 104 a itself may also perform the copy processing. For example, the SCSI protocol standard contains an EXTENDED COPY command. - On receiving the EXTENDED COPY command, a storage device that can handle this command copies a specified area in the storage device to a specified area in another storage device according to what the command specifies. - Thus, if the
storage device 104 a holding the original data can handle the EXTENDED COPY command, it is possible to perform the copy processing also in the manner described below. - The
controller 112 transmits the EXTENDED COPY command for the copy processing to the storage device 104 a that holds the original data. The storage device 104 a transfers the contents of the storage area specified by the EXTENDED COPY command to the storage device 104 b. After the transfer of the data stored in the specified storage area ends, the storage device 104 a transmits a response indicating the end of the processing to the source of the EXTENDED COPY command (in this case, to the copy management switch 103), and the copy processing is thereby completed. - Next, operation of the
copy management switch 103 after the completion of the copy processing will be described. - As described above, the
copy management switch 103 which has received a frame from the host 105 translates the source or destination address of the frame by means of the address translation unit 114, and then transmits the frame to the appropriate device. -
FIG. 5 is a flowchart illustrating an example of how the address translation unit 114 executes address translation processing. - The instant the port 111 receives a frame, the
CPU 1141 starts execution of the address translation program 126 (step 151). Then, the CPU 1141 judges whether or not the frame received by the port 111 is a frame containing a read request issued by the host 105 to the storage device 104 (step 152). - If the frame contains a read request, the CPU 1141 judges whether or not a copy of the data requested by the read request exists in the computer system. To be more specific, the CPU 1141 judges whether or not the copy management table 131 has a copy management entry 132 satisfying the condition that the area indicated by the information stored in the field 133, the field 134, the field 135, and the field 136 of the copy management entry 132 includes the area indicated by the destination SAN address, the LUN, the LBA, and the length stored in the frame containing the read request (step 153). - If a copy management entry 132 satisfying the condition of step 153 exists, the CPU 1141 converts, by use of that copy management entry 132, the frame containing the read request into a frame containing a read request for the storage device 104 in which the copy data is stored (step 154). To be more specific, the CPU 1141 changes the destination of the frame containing the read request to the value stored in the field 137, changes the LUN to the value stored in the field 138, and changes the LBA to the value determined by (the LBA + the value stored in the field 139 − the value stored in the field 135). - If it is judged in step 152 that the frame is not intended for a read request, then the CPU 1141 judges whether the contents of the frame are intended for data (hereinafter referred to as "read data") that is transmitted from the storage device 104 according to a read request, or for a response (step 155). If the frame is intended for read data or a response, the CPU 1141 judges whether or not the read data or the response has been transmitted from the storage device 104 b that stores the copy data. To be more specific, the CPU 1141 judges whether or not the copy management table 131 has a copy management entry 132 whose value stored in the field 137 agrees with the frame source (step 156). - If the frame has been transmitted from the storage device 104 b that stores the copy data, the CPU 1141 refers to the copy management entry 132 that has been found in step 156 to change the frame source to the original SAN address stored in the field 133 (step 157). - After the processing described in
steps 154 and 157, the CPU 1141 transmits the frame completing the processing to the switch processing unit 115. In this connection, if there exists no appropriate copy management entry 132 in step 153 or step 156, or if it is judged in step 155 that the frame is neither read data nor a response, the CPU 1141 transmits the received frame to the switch processing unit 115 just as it is (step 158). - The series of operations of the computer system according to this embodiment is summarized below.
- The user or administrator of the system uses the
management terminal 106 to instruct the copy management switch 103 to perform the data copy and prepare the copy management table 131. - FIG. 4 illustrates an example as follows: the SAN domain address of the switch 102 c is 4; the SAN node address of the storage device 104 a is 01; the SAN domain address of the copy management switch 103 is 5; the SAN node address of the storage device 104 b is 02; original data having a length of 100000 is stored in LUN 0 of the storage device 104 a starting from LBA 0; and the whole original data is copied to an area starting from LBA 50000 of LUN 5 of the storage device 104 b. In addition, the information indicating the association of the original data with the copy data is stored in each field of the copy management entry 132. - The host 105 issues a read request to the storage device 104 a. The frame corresponding to the read request includes the SAN address, LUN, LBA, and length of the storage device 104 a as a destination, and the SAN address of the host 105 as a source. - The read request is routed according to the SAN address of the destination, and consequently arrives at the copy management switch 103 via the switch 102 a. The copy management switch 103 which has received the read request checks the contents of the frame against the information in the copy management table 131. If there exists copy data corresponding to the read request, the copy management switch 103 converts the read request into a read request for the copy data, and then routes the converted frame to the storage device 104 b. - The storage device 104 b that has received the read request reads the copy data targeted by the read request, and then transmits the read data to the host 105 as the source of the read request. The frame of the read data includes the SAN address of the host 105 as a destination, and the SAN address of the storage device 104 b as a source. - The copy management switch 103 which has received the read data changes the source of the read data to the storage device 104 a according to the information in the copy management table 131. After that, the read data is routed according to the SAN address of the destination, and consequently arrives at the host 105 via the switch 102 a. The host 105 receives the read data as if it were transmitted from the storage device 104 a. - As a result of this series of operations, the read request to the storage device 104 a is actually handled by the storage device 104 b, which can be accessed at higher speed in the SAN 101. Accordingly, the response speed becomes higher, and it becomes possible to reduce the loads on the switches 102 b and 102 c and the storage device 104 a. -
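The two translations in this series of operations — retargeting the read request at the copy data (step 154) and rewriting the source of the returning read data (step 157) — can be sketched with the example values of FIG. 4; the dict-based frame representation and function names are assumptions of this sketch:

```python
# Sketch of the two address translations, using the FIG. 4 example values
# (original at SAN address "4.01", LUN 0, LBA 0, length 100000; copy at
# "5.02", LUN 5, starting at LBA 50000). Frames are modeled as dicts.
ENTRY = {"orig_san": "4.01", "orig_lun": 0, "orig_lba": 0, "length": 100000,
         "copy_san": "5.02", "copy_lun": 5, "copy_lba": 50000}

def redirect_read_request(frame, e=ENTRY):
    """Step 154: retarget a read request at the copy data."""
    out = dict(frame)
    out["dst"] = e["copy_san"]
    out["lun"] = e["copy_lun"]
    # The LBA is shifted by the difference of the two starting addresses:
    # LBA + (copy LBA in field 139) - (original LBA in field 135).
    out["lba"] = frame["lba"] + e["copy_lba"] - e["orig_lba"]
    return out

def rewrite_read_data_source(frame, e=ENTRY):
    """Step 157: make returning read data appear to come from the
    original storage device."""
    out = dict(frame)
    if out["src"] == e["copy_san"]:
        out["src"] = e["orig_san"]
    return out
```

For instance, a read of the original data at LBA 30000 becomes a read of "5.02", LUN 5, LBA 80000, and the read data returning from "5.02" is relabeled as coming from "4.01".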
FIG. 6 is a diagram illustrating a second embodiment of a computer system to which the present invention is applied. The point of difference between the first and second embodiments is that a plurality of copy management switches 203 are provided in the second embodiment. It is to be noted that because the other configurations are similar to those in the first embodiment, detailed description thereof will be omitted. - Additionally, this embodiment is based on the assumptions that the shortest route from the host 105 to the storage device 104 a is the host 105 → the switch 102 a → the copy management switch 203 a → the switch 102 b → the storage device 104 a, and that the shortest route from the host 105 to the storage device 104 b is the host 105 → the switch 102 a → the copy management switch 203 b → the storage device 104 b. Moreover, this embodiment is also based on the assumption that the connection cost from the host 105 to the storage device 104 a is higher than that from the host 105 to the storage device 104 b. - The administrator or user of the system uses the
management terminal 106, which is connected to the copy management switches 203 a and 203 b, to instruct the copy management switch 203 b to copy the original data held in the storage device 104 a to the storage device 104 b, and then writes information indicating the association of the original data with the copy data to the copy management table 231 possessed by each of the copy management switches 203 a and 203 b. - In addition, a proxy address table 241, as well as the programs described in the first embodiment, is stored in the memory 1122 possessed by the copy management switch 203. Moreover, the contents of the address translation program 226 executed by the CPU 1141 also differ from those described in the first embodiment. - Further, as a substitute for the copy management table 131, a copy management table 231 is stored in the non-volatile storage 1123. FIG. 11 is a diagram illustrating an example of the copy management table 231. In addition to the fields of the copy management entry 132 in the first embodiment, the copy management entry 232 of the copy management table 231 has a field 240 for storing a local flag. - The local flag is a flag indicating the relationship of connection between the storage device 104 corresponding to the original SAN address 133 of the copy management entry 232 and each of the plurality of copy management switches 203, including the copy management switch 203 that holds the entry. To be more specific, the value is set in accordance with the number of devices existing between each of the plurality of copy management switches 203 and the storage device 104. Hereinafter, a state in which the number of devices is small is expressed as "near". - In this embodiment, the copy management switch 203 b is connected at a position nearer to the storage device 104 b than the copy management switch 203 a is. Therefore, the management terminal 106 stores, in the copy management table 231 of the copy management switch 203 a, the copy management entry 232 in which the local flag is 0, and stores, in the copy management table 231 of the copy management switch 203 b, the copy management entry 232 in which the local flag is 1. -
FIG. 7 is a diagram illustrating a configuration of the proxy address table 241. The proxy address table 241 is a table for storing the association among a SAN address that points to the host 105 requesting data (hereinafter referred to as a "host SAN address"), an original SAN address, and a proxy address used for the address translation processing in this embodiment (hereinafter referred to as a "proxy SAN address"). - The proxy address table 241 has a plurality of proxy address entries 242. Each of the proxy address entries 242 comprises a field 243 for storing a host SAN address, a field 244 for storing an original SAN address, and a field 245 for storing a proxy SAN address. How to use the proxy address table 241 will be described later. -
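A minimal sketch of the proxy address table of FIG. 7 and of the uniqueness requirement on generated proxy SAN addresses (described at step 256 below) might look like this; the "domain.node" string encoding, the linear probing, and the class itself are illustrative assumptions:

```python
# Sketch of the proxy address table (FIG. 7) and proxy-address allocation.
# A generated proxy SAN address must lie in this switch's SAN domain and
# collide neither with the SAN addresses of known nodes nor with proxies
# already recorded in the table.

class ProxyAddressTable:
    def __init__(self, my_domain, known_nodes):
        self.my_domain = my_domain
        self.known = set(known_nodes)   # SAN addresses of real nodes
        self.entries = []               # (host_san, orig_san, proxy_san)

    def allocate(self, host_san, orig_san):
        """Generate a fresh proxy SAN address and record the association
        among host SAN address, original SAN address, and proxy."""
        in_use = self.known | {p for _, _, p in self.entries}
        node = 0
        while f"{self.my_domain}.{node:02d}" in in_use:
            node += 1
        proxy = f"{self.my_domain}.{node:02d}"
        self.entries.append((host_san, orig_san, proxy))
        return proxy

    def release(self, proxy_san):
        """Drop the entry once the final response has been forwarded."""
        self.entries = [e for e in self.entries if e[2] != proxy_san]
```

Because every outstanding read request gets its own proxy address, replies can be matched back to the right host/original pair even when several requests are in flight.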
FIG. 8 is a flowchart illustrating an example of the address translation processing by the CPU 1141 according to this embodiment. Here, the address translation program executed in the copy management switch 203 is called an address translation program 226. - The instant the port 111 receives a frame, the CPU 1141 starts execution of the address translation program 226 (step 251). The CPU 1141 judges whether or not the received frame is intended for a read request (step 252). If the received frame is intended for a read request, the CPU 1141 judges whether or not a copy of the data requested by the read request exists in the computer system. To be more specific, the CPU 1141 judges whether or not the copy management table 231 has a copy management entry 232 satisfying the condition that the area indicated by the information stored in the field 133, the field 134, the field 135, and the field 136 of the copy management entry 232 includes the area indicated by the destination SAN address, the LUN, the LBA, and the length stored in the frame containing the read request (step 253). - If there exists a copy management entry 232 that satisfies the condition described in step 253, the CPU 1141 uses the copy management entry 232 found in step 253 to convert the read request into a read request to the storage device 104 that stores the copy data. To be more specific, the destination of the frame containing the read request is changed to the copy SAN address stored in the field 137; the LUN of the frame is changed to the copy LUN stored in the field 138; and the LBA of the frame is changed to the value determined by (the LBA + the value of the copy LBA stored in the field 139 − the value of the original LBA stored in the field 135) (step 254). - If the destination of the frame is not included in the copy management table 231 in step 253, or if the frame has been converted into the read request to the storage device 104 that stores the copy data, then the CPU 1141 judges whether or not the read request is issued to a storage device 104 that is connected to the copy management switch 203 to which the CPU 1141 belongs. More specifically, to begin with, a judgment is made as to whether or not the area indicated by the SAN address, LUN, LBA, and length of the destination of the read request is included in the area indicated by the copy SAN address 137, the copy LUN 138, the copy LBA 139, and the original length 136 of a copy management entry 232 in the copy management table 231. A further judgment is then made as to whether or not the pertinent copy management entry 232 has a value of 1 in the local flag 240 (step 255). - If a copy management entry 232 in which the local flag is 1 exists, the CPU 1141 changes the source of the frame containing the read request. The reason for the change is to differentiate a read request for the copy data from a read request for data other than the copy data held by the storage device 104. To be more specific, the CPU 1141 first generates a proxy SAN address. The proxy SAN address is determined such that it includes the SAN domain address assigned to the copy management switch 203, that it does not overlap the SAN addresses of the other nodes, and that it does not overlap any proxy SAN address stored in the field 245 of a proxy address entry 242 held in the proxy address table 241. - Next, the CPU 1141 writes, to a proxy address entry 242 that is not in use in the proxy address table 241, the association among the host SAN address corresponding to the host 105 that has issued the read request, the original SAN address, and the proxy address. To be more specific, by use of the copy management entry 232 found in step 255, the CPU 1141 stores the host SAN address indicating the host 105 as the source of the read request in the field 243, stores the original SAN address 133 in the field 244, and stores the generated proxy SAN address in the field 245. The CPU 1141 then changes the source of the frame containing the read request to the generated proxy address (step 256). - If it is judged in
step 252 that the received frame is not intended for a read request, the CPU 1141 judges whether or not the received frame is intended for read data (step 257). If the received frame is intended for read data, the CPU 1141 judges whether or not the destination of the read data is a proxy SAN address generated by the copy management switch 203. More specifically, the CPU 1141 judges whether or not the proxy address table 241 includes a proxy address entry 242 in which the SAN address pointing to the destination of the read data agrees with the proxy SAN address stored in the field 245 (step 258). - If a proxy address entry 242 satisfying the condition is found, the CPU 1141 uses the information stored in the proxy address entry 242 found in step 258 to change the source of the frame to the original SAN address stored in the field 244, and also to change the destination to the host SAN address stored in the field 243 (step 259). - If it is judged in step 257 that the received frame is not intended for read data, the CPU 1141 judges whether or not the received frame is intended for a response (step 260). If the frame is intended for a response, the CPU 1141 judges whether or not the destination of the frame is a node indicated by a proxy SAN address generated by the copy management switch 203. More specifically, the CPU 1141 judges whether or not the proxy address table 241 includes a proxy address entry 242 in which the SAN address pointing to the destination of the frame agrees with the proxy SAN address stored in the field 245 (step 261). - If a proxy address entry 242 satisfying the condition is found in step 261, the CPU 1141 uses the information stored in the found proxy address entry 242 to change the source of the frame to the original SAN address stored in the field 244, and also to change the destination to the host SAN address stored in the field 243. In addition, the CPU 1141 deletes the proxy address entry 242 from the proxy address table 241 (step 262). - If the destination of the frame does not satisfy the condition shown in
step 255, if the destination of the frame is not judged to be the proxy SAN address in step 258 or step 261, or after the processing in step 254, 256, 259, or 262 is completed, the CPU 1141 transmits the frame completing the processing to the switch processing unit 115 (step 263). - The series of operations of the computer system according to this embodiment is summarized below.
- The user or administrator of the system uses the
management terminal 106 to instruct the copy management switch 203 b to perform the data copy and store the information in the copy management table 231 of each copy management switch. - The host 105 issues a read request to the storage device 104 a. The read request includes the SAN address, LUN, LBA, and length of the storage device 104 a as a destination, and the SAN address of the host 105 as a source. The read request is routed according to the SAN address of the destination, and thereby arrives at the copy management switch 203 a via the switch 102 a. - On receiving the read request, the copy management switch 203 a checks the read request against the information in the copy management table 231. The copy management switch 203 a converts the read request into a read request for the copy data held in the storage device 104 b, and then routes the converted read request. However, because the storage device 104 b is not connected to the copy management switch 203 a, a proxy address is not generated, nor is the source changed. - The modified read request is routed according to the SAN address of the destination, and consequently arrives at the copy management switch 203 b. The copy management switch 203 b which has received the read request checks the contents of the received frame against the information in the copy management table 231. As a result, the copy management switch 203 b generates a proxy SAN address, and then writes the association among the SAN address of the host 105, the SAN address of the storage device 104 a as the original, and the proxy SAN address to a proxy address entry 242 of the proxy address table 241. After that, the copy management switch 203 b changes the source of the read request to the proxy SAN address before routing the frame. - The
storage device 104 b which has received the read request reads out the copy data corresponding to the read request and, after setting the destination to the proxy SAN address, transmits the read data to the copy management switch 203 b. The frame containing the read data includes the proxy SAN address as a destination, and the SAN address of the storage device 104 b as a source. - The instant the frame containing the read data arrives at the copy management switch 203 b, the copy management switch 203 b changes the source of the read data to the SAN address of the storage device 104 a, and also changes the destination to the SAN address of the host 105, on the basis of the information in the proxy address table 241. - After that, the frame containing the read data is routed according to the SAN address of the destination, and consequently arrives at the host 105 via the switch 102 a. The host 105 receives the read data as if it were transmitted from the storage device 104 a. - On the other hand, the instant a response from the storage device 104 b arrives at the copy management switch 203 b, the copy management switch 203 b changes the source to the SAN address of the storage device 104 a, and also changes the destination to the SAN address of the host 105, before routing the response. At the same time, the copy management switch 203 b deletes from the proxy address table 241 the proxy address entry 242 that stores the association. The host 105 receives the response as if it were transmitted from the storage device 104 a. - In this embodiment, the copy management switch 203 a exists on the path from the host 105 to the storage device 104 a holding the original data. However, in contrast to the first embodiment, the storage device 104 b holding the copy data is not connected to the copy management switch 203 a. Nevertheless, in this embodiment, the read request is converted into a read request to the storage device 104 b holding the copy data, and subsequently arrives at the copy management switch 203 b, to which the storage device 104 b holding the copy data is connected. As a result, the copy data is transmitted to the host 105 as read data from the storage device 104 a. - Additionally, in this embodiment, using a proxy SAN address makes it possible to differentiate a read request changed by the copy management switch 203 from a command, such as a read request, issued to the
storage device 104 b holding the copy data. Hence, the storage device 104 b having the copy data can be used as a usual storage device 104 b. Moreover, because each read request uses a unique proxy SAN address, it becomes possible to copy the original data held in a plurality of storage devices 104 to one storage device 104 b, and then to use the copied data as copy data. - Incidentally, in this embodiment, the proxy SAN address is used to classify read data from the storage device 104 into read data whose address information is required to be translated and read data whose address information is not required to be translated. However, if information that enables recognition of the association among a read request, read data, and a response is added to a frame, it is also possible to classify the read data by that additional information without using the proxy SAN address. - For example, in the fiber channel protocol, an ID called an exchange ID is added to each frame. Accordingly, it is also possible to classify the read data and the response according to this information. -
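Before turning to the third embodiment, the frame handling of FIG. 8 at the switch local to the copy data (local flag = 1) can be condensed into a sketch; the frame fields, the proxy naming scheme, and the stand-in for the original SAN address that field 133 would supply are all assumptions of this sketch:

```python
# Condensed sketch of the FIG. 8 flow at the copy-local switch: an
# outgoing read request gets a proxy source; read data and responses
# addressed to a proxy get their source and destination mapped back, and
# forwarding a response retires the proxy entry.

def handle_frame(frame, proxy_table, orig_san="4.01"):
    """orig_san stands in for the original SAN address (field 133) that
    the matching copy management entry would supply in step 255."""
    f = dict(frame)
    if f["type"] == "read_request":
        # Assume a freshly generated, domain-unique proxy SAN address.
        proxy = f"proxy{len(proxy_table)}"
        proxy_table[proxy] = (f["src"], orig_san)
        f["src"] = proxy
    elif f["type"] in ("read_data", "response") and f["dst"] in proxy_table:
        proxy = f["dst"]
        host, orig = proxy_table[proxy]
        # Present the frame as original-storage-to-host traffic.
        f["src"], f["dst"] = orig, host
        if f["type"] == "response":
            del proxy_table[proxy]  # the exchange is finished
    return f
```

Read data keeps the proxy entry alive so the trailing response can be translated the same way; only the response removes it.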
FIG. 9 is a diagram illustrating a configuration example of a computer system to which a third embodiment according to the present invention is applied. A SAN 101 comprises switches 102 and a copy management switch 303. In addition, a host 105, a storage device 104 a, and a storage device 104 b are connected to the SAN 101. Original data is stored in the storage device 104 a. - It should be noted that this embodiment is also based on the assumption that the connection cost for the communication line between the host 105 and the storage device 104 a is higher than that for the communication line between the host 105 and the storage device 104 b. - As is the case with the other embodiments described above, the user or administrator of the system uses the management terminal 106, which is connected to the copy management switch 303, to copy the original data, and then writes the information about the association of the original data with the copy data to the copy management table 231 of the copy management switch 303 for the purpose of managing the information. In this embodiment, in addition to the copy management table 231, a virtual address table 341, which will be described below, is also stored and managed. - Using the method described below, the
copy management switch 303 works for a device connected to thecopy management switch 303 as if there were a virtual storage device 104 (hereinafter referred to as a “virtual storage 307”). - In this embodiment, the
host 105 thereafter judges that the original data is stored in the virtual storage 307, and thereby issues a read request to the virtual storage 307. The reason why the read request is issued to the virtual storage 307 is as follows: because copy management entries 232 are set in the copy management switch 303 through the management terminal 106, the copy management switch 303 changes a read request to the virtual storage 307 into a read request to the storage device 104 a or the storage device 104 b depending on the presence or absence of copy data. This enables effective use of the network. It is to be noted that a WWN of the virtual storage 307 is given to the host 105. The host 105 uses a name service to obtain the SAN addresses of the virtual storage 307 from the WWN of the virtual storage 307. - In this connection, a configuration of the
copy management switch 303 in this embodiment is the same as thecopy management switch 103 in the first embodiment. However, as far as the information, etc. stored in the memory of thecopy management switch 303 are concerned, there are points of difference from the second embodiment as below. - A first point of difference is that the
CPU 1121 executes an initialization program 321 (theinitialization program 121 used for the copy management switch 303) to read the virtual address table 341 described below from thenon-volatile storage 1123, and then to write to thename database 125 the address information of thevirtual storage 307 stored in the virtual address table 341. A second point of difference is that theCPU 1121 executes a management-terminal-submitted request processing program 322 (therequest processing program 122 used for the copy management switch 303) not only to perform the processing in the second embodiment, but also to change the virtual address table 341 held in thenon-volatile storage 1123 in response to a request that comes from themanagement terminal 106 and is received by themanagement port 113. -
FIG. 10 is a diagram illustrating contents of the virtual address table 341. The virtual address table 341 comprises a plurality of virtual address entries 342. Each virtual address entry 342 corresponds to one virtual node, for instance, the virtual storage 307. A SAN address of the virtual node (hereinafter referred to as a "virtual SAN address") and a virtual WWN (hereinafter referred to as a "virtual WWPN" and a "virtual WWNN") are written to entries of the virtual address entry 342. - It is to be noted that the copy management table 231 and the proxy address table 241 used in this embodiment are the same as those used in the second embodiment. The address translation program 226 is also the same as that used in the second embodiment.
- A series of operation of the computer system according to this embodiment will be summarized as below.
- In the first place, the user or administrator of the system uses the
management terminal 106, which is connected to the copy management switch 303, to instruct the copy management switch 303 to copy original data stored in the storage device 104 a to the storage device 104 b. Subsequently, the virtual address table 341 and the copy management table 231 are set. FIGS. 10 and 11 illustrate values set in the copy management table 231, the copy management entries 232 a, 232 b, and the virtual address entry 342. The values are set as follows: a SAN domain address of the switch 102 a is 4; a SAN domain address of the copy management switch 303 is 5; a SAN address of the storage device 104 a is 401; a SAN address of the storage device 104 b is 501; a SAN address assigned to the virtual storage 307 is 502; a WWPN is 1234; and a WWNN is 5678. In this case, original data is stored in an area having a length of 100000 and starting from LUN 0 and LBA 0 of the storage device 104 a, and then part of the original data, starting from the top of the original data and having a length of 50000, is copied to an area starting from LUN 0 and LBA 0 of the storage device 104 b. The copy management entry 232 a shows that a read request to read an area having a length of 50000 and starting from LUN 0 and LBA 0 of the virtual storage 307 is converted into a read request to the storage device 104 b. On the other hand, the copy management entry 232 b shows that a read request to read an area having a length of 50000 and starting from LUN 0, LBA 50000 of the virtual storage 307 is converted into a read request to the storage device 104 a. - After the settings end, the
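The redirection implied by the two copy management entries can be sketched as follows. The table layout and function name are illustrative assumptions, but the numeric values (SAN addresses 401 and 501, the 50000-block split of the 100000-block original) are taken from the example above.

```python
# Sketch of the example settings: the first 50000 blocks of the virtual
# storage map to the copy (SAN address 501), the rest to the original
# (SAN address 401). Field names are assumptions.

COPY_MANAGEMENT_TABLE = [
    {"virt_lba": 0,     "length": 50000, "target_san": 501, "target_lba": 0},
    {"virt_lba": 50000, "length": 50000, "target_san": 401, "target_lba": 50000},
]

def redirect_read(lba, length):
    """Map a read of the virtual storage 307 to (target SAN address,
    target LBA), or None if no single entry covers the whole request."""
    for e in COPY_MANAGEMENT_TABLE:
        if e["virt_lba"] <= lba and lba + length <= e["virt_lba"] + e["length"]:
            return e["target_san"], e["target_lba"] + (lba - e["virt_lba"])
    return None
```

A request that straddles the boundary between the copied and uncopied halves matches neither entry; how such a request is handled is not detailed here.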
host 105 issues a read request to the virtual storage 307. A frame of the read request includes a SAN address, a LUN, an LBA, and a length of the virtual storage 307 as a destination, and a SAN address of the host 105 as a source. The frame of the read request is routed according to the SAN address of the destination, and consequently arrives at the copy management switch 303 via the switch 102 a. - The
copy management switch 303 that has received the read request checks information included in the frame of the read request against information in the copy management table 231. As a result of the check, if the read request matches the copy management entry 232 a, as is the case with the second embodiment, the copy management switch 303 changes a destination of the read request, and thereby converts the read request into a read request to the storage device 104 b having copy data. Further, the copy management switch 303 generates a proxy address to change a source to the proxy address, and then transmits to the storage device 104 b a frame containing the read request. After that, the copy management switch 303 writes the association among the host 105, the virtual storage 307, and the proxy address to the proxy address entry 242 of the proxy address table 241. Moreover, if the read request matches the copy management entry 232 b as a result of the check, the read request is converted into a read request for the original data stored in the storage device 104 a, and similar processing is performed thereafter. - The
storage device 104 b that has received the read request reads out specified data, and then sets the proxy SAN address as a destination before transmitting the read data. A frame of the read data includes the proxy SAN address indicating the destination, and a SAN address of thestorage device 104 b as a source. - On receiving the frame of the read data, the
copy management switch 303 changes a source of the frame of the read data to a SAN address of thevirtual storage 307, and also changes a destination to a SAN address of thehost 105, on the basis of information in the proxy address table 241. The read data is routed according to the SAN address of the destination, and consequently arrives at thehost 105 via theswitch 102 a. Thehost 105 receives the read data as if it were transmitted from thevirtual storage 307. - In the first and second embodiments, the
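The two frame rewrites just described (request in, read data out) might look like this in outline. SAN addresses 502 (virtual storage) and 501 (copy device) follow the example values in the text; the host address 405, the proxy address 900, and the frame encoding are assumptions.

```python
# Sketch of the round trip through the copy management switch 303.
# Addresses 502 and 501 come from the example; 405 and 900 are assumed.

VIRTUAL, HOST, COPY, PROXY = 502, 405, 501, 900

def rewrite_read_request(frame):
    """Redirect a request bound for the virtual storage to the copy device,
    substituting a proxy address as the source."""
    assert frame["dst"] == VIRTUAL
    return dict(frame, dst=COPY, src=PROXY)

def rewrite_read_data(frame):
    """Relabel returning read data so the host sees the virtual storage
    as the source of the data."""
    assert frame["dst"] == PROXY
    return dict(frame, dst=HOST, src=VIRTUAL)
```

The host never observes the SAN address of the storage device 104 b; from its point of view all traffic is exchanged with the virtual storage 307.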
copy management switch 103 and the like exist on a network path (hereinafter referred to as a "path") between the host 105 and the storage device 104 a holding the original data, and the copy management switch 103 or the like changes the frame of the read request. However, because the copy management switch 303 provides the virtual storage 307 in this embodiment, a read request directly arrives at the copy management switch 303, which does not exist on the path between the host 105 and the storage device 104 a. - Furthermore, the present embodiment can employ another configuration as shown in
FIG. 12 . - In this configuration, a
storage device 104 a is connected to acopy management switch 303 a. The administrator or the like uses amanagement terminal 106 to set a copy management table 231 and a virtual address table 341 of the copy management switches 303 a, 303 b so that each copy management switch provides avirtual storage 307. - In this case, the
virtual address entry 342 whose WWNN is equivalent is written to the virtual address table 341 of the copy management switches 303 a, 303 b so that, when the hosts access it, the virtual storage 307 is recognized as a node having a plurality of ports. - Each host 105 can access the virtual storage 307 by use of any of the SAN addresses. Examples of methods for selecting one port from among the plurality of ports pointed to by the plurality of SAN addresses include the two methods described below. - One is that if the
host 105 can obtain topology information of theSAN 101 from the switch 102 or thecopy management switch 303, a port entailing a lower connection cost is selected, and then a read request is transmitted to the port. - The other is that if the
host 105 cannot obtain topology information, the host 105 transmits a read request to both ports, and then the port which can make the faster access is selected from the two ports. - Incidentally, in the above-mentioned configuration, if the
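The two port-selection methods can be sketched together. The function signature and cost representation are assumptions; only the decision rule (prefer topology-derived costs when available, otherwise probe and keep the faster port) comes from the text.

```python
# Sketch of the two port-selection methods described above (names assumed).

def select_port(ports, costs=None, probe=None):
    """Pick one of several ports of the virtual storage.

    costs: optional dict port -> connection cost (topology information known).
    probe: optional function port -> measured access time (topology unknown).
    """
    if costs is not None:                      # topology information available
        return min(ports, key=lambda p: costs[p])
    return min(ports, key=probe)               # fall back to measuring both
```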
storage device 104 b, to which the whole original data has been copied, gets out of order, then on the assumption that the copy management switch 303 can detect a failure of the storage device 104, it is possible to provide fail-safe operation in which changing the copy management entry 232 permits an access request to be routed to the intact storage device 104 a. - There are several methods by which the
copy management switch 303 can detect a failure of the storage device 104. For example, if an optical fiber is used as a communication line, the occurrence of a physical disconnection can be detected by extinction of light at the port 111. In addition, because a response from thestorage device 104 b includes error information, thecopy management switch 303 can also detect a failure of the storage device 104 by monitoring contents of the response at the port 111. An example of fail-safe will be described below. - The
copy management switch 303 that has detected a failure of the storage device 104 notifies the management terminal 106 of the occurrence of the failure. The user or the like then uses the management terminal 106 to set again the copy management table 231 of the copy management switch 303. For example, if the storage device 104 b in FIG. 9 gets out of order, the user uses the management terminal 106 to delete the copy management entry 232 a of the copy management switch 303, then to set the original LBA 135 b and the copy LBA 139 b of the copy management entry 232 b at 0, and further, to set the original length 136 b at 100000. - As a result, the
copy management switch 303 routes all read requests, which are issued from thehost 105 to thevirtual storage 307, to thestorage device 104 a. Similar failover processing can be performed also in the first and second embodiments. - Moreover, in the configuration shown in
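The reconfiguration above (delete entry 232 a, widen entry 232 b to the whole 100000-block original) can be sketched as follows. The table representation and function name are assumptions; the SAN addresses and lengths follow the example values in the text.

```python
# Sketch of the fail-over reconfiguration: entries that point at the failed
# copy device (SAN address 501) are removed, and the surviving entry for
# the original (401) is widened to cover the whole 100000-block area.

def fail_over(table, failed_san=501, original_len=100000):
    table[:] = [e for e in table if e["target_san"] != failed_san]
    for e in table:
        e["virt_lba"] = 0        # corresponds to setting LBA 135 b at 0
        e["target_lba"] = 0      # corresponds to setting LBA 139 b at 0
        e["length"] = original_len
    return table
```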
FIG. 12 , it is possible to realize the failover processing by changing a port selected by thehost 105 depending on whether or not a timeout of the transmitted read request is encountered. -
FIG. 13 is a diagram illustrating a configuration example of a computer system to which a fourth embodiment according to the present invention is applied. This embodiment is different from the third embodiment in that copy management switches 403 a, 403 b provide a virtual switch 408. - A configuration of each of the
copy management switches 403 a, 403 b is the same as that of the copy management switch 303 in the third embodiment. However, information and the like stored in the memory 1122 possessed by each of the copy management switches 403 a, 403 b differ from those in the third embodiment in the following points: - Firstly, in addition to the contents of the virtual address table 341 in the third embodiment, an entry 446 for storing a virtual domain address is added to the virtual address table 441 (the virtual address table 341 used for the copy management switch 403). A virtual domain address stored in the entry 446 indicates a SAN domain address of the
virtual switch 408. - Secondly, the
CPU 1121 executes the routing protocol processing program 423 (the routing protocol processing program 123 used for the copy management switch 403). The CPU 1121 then exchanges, with another switch, information about being connected to the virtual switch 408 having a SAN domain address specified by a virtual domain address stored in the entry 446, and thereby creates a routing table. In this case, in order to ensure consistency of routing, a connection cost between the copy management switch 403 a and the virtual switch 408 and a connection cost between the copy management switch 403 b and the virtual switch 408 are set so that they are equal to each other. - A flow of a series of processing in this embodiment will be described below.
- In the first place, the user or administrator of the system uses the
management terminal 106 connected to the copy management switches 403 a, 403 b to issue to thecopy management switch 403 a (or 403 b) an instruction to copy original data stored in thestorage device 104 a to thestorage device 104 b. Subsequently, the administrator or the like sets information in the virtual address table 441 and the copy management table 231 that are provided in each of the copy management switches 403 a, 403 b. - Here, in this embodiment, on the assumption that a SAN domain address of the
copy management switch 403 a is 4, a SAN domain address of thecopy management switch 403 b is 5, a SAN domain address of thevirtual switch 408 is 8, a SAN address of thestorage device 104 a is 401, a SAN address of thestorage device 104 b is 501, a SAN address assigned to thevirtual storage 307 is 801, a WWPN is 1234, and a WWNN is 5678, original data is stored in an area having a length of 100000 starting fromLUN 0 andLBA 0 of thestorage device 104 a, and then part of the original data, starting from the top of the original data and having a length of 50000, is copied to an area starting fromLUN 0 andLBA 0 of thestorage device 104 b. - Additionally, a connection cost between the
host 105 and each of the copy management switches 403 a, 403 b is assumed to be as follows: for thehost 105 a, a connection cost between thehost 105 a and thecopy management switch 403 b is lower than a connection cost between thehost 105 a and thecopy management switch 403 a. For thehost 105 b, a connection cost between thehost 105 b and thecopy management switch 403 a is lower than a connection cost between thehost 105 b and thecopy management switch 403 b. - When the
host 105 a issues a read request to thevirtual storage 307, the read request arrives at thecopy management switch 403 b as a result of the routing that can achieve the lowest connection cost. After that, processing which is the same as that in the third embodiment is performed. - When the
host 105 b issues a read request to thevirtual storage 307, the read request arrives at thecopy management switch 403 a, and then processing which is the same as that in the third embodiment is performed. - According to this embodiment, the
host 105 can transmit a read request to the copy management switch 403 whose connection cost is low without selecting a path as performed in the configuration in FIG. 12. The reason is that, since a connection cost between the copy management switch 403 a and the virtual switch 408 is equivalent to a connection cost between the copy management switch 403 b and the virtual switch 408, a frame which is transmitted from the host 105 to a SAN domain of the virtual switch 408 arrives at the copy management switch 403 whose connection cost is lower for the host 105. -
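The routing argument can be illustrated numerically. Only the equality of the two virtual-switch link costs comes from the text; the particular cost values, switch names, and function signature are assumptions.

```python
# Sketch: both copy management switches advertise the same cost to the
# virtual switch 408, so the total cost to its SAN domain is decided
# purely by each host's cost to the real switches.

def next_hop(host_to_switch_costs, virtual_link_cost=1):
    """Return the copy management switch a frame for the virtual domain
    is routed through, under lowest-cost routing.

    host_to_switch_costs: dict switch name -> cost from this host.
    """
    totals = {sw: c + virtual_link_cost
              for sw, c in host_to_switch_costs.items()}
    return min(totals, key=totals.get)
```

With assumed costs, a host closer to switch 403 b is routed through 403 b, and a host closer to 403 a through 403 a, which is the behavior described for the hosts 105 a and 105 b.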
FIG. 14 is a diagram illustrating a configuration example of a computer system to which a fifth embodiment according to the present invention is applied. This embodiment is different from the first embodiment in that a copy management switch 503 has storage devices. - In this embodiment, the
storage device 104 a holds original data, and thehost 105 issues a read request for the original data. Additionally, in this embodiment, thecopy management switch 503 holds copy data. On receipt of the read request for the original data, which is issued from thehost 105, thecopy management switch 503 reads out the copy data held in thestorage devices host 105. -
FIG. 15 is a diagram illustrating a specific example of an internal configuration of thecopy management switch 503. - The
copy management switch 503 comprises the following: a protocol converter 5032 including a plurality of ports 5031 and a port processor 5033; a disk controller 5035; a hard disk 5036; a management unit 5037; and a switch unit 5034 for connecting these components. The port processor 5033 includes a CPU and a memory; and the management unit 5037 includes a CPU, a memory, and a storage device. - The disk controller 5035 provides another device with the storage capacity, which is obtained from a plurality of hard disks connected to the disk controller 5035, as one or a plurality of logical storage areas.
- The
copy management switch 503 gathers the logical storage areas, which are provided by the plurality of disk controllers 5035 included in thecopy management switch 503 itself, into one or a plurality of virtual storage areas, and then provides the virtual storage area or areas to a device connected to thecopy management switch 503. - The
copy management switch 503 transmits/receives a command and data to/from other nodes through the ports 5031. The protocol converter 5032, which has received the command, and the like, through the port 5031, converts a protocol used for the command and the data which have been received, and then transmits them to the switch unit 5034. Here, the protocol converter 5032 judges whether or not the received command is targeted for the virtual storage area provided by thecopy management switch 503. If the received command is targeted for its own storage area, the protocol converter 5032 issues a command to the disk controller 5035 corresponding to the storage area. If the received command is not targeted at its own storage area, the protocol converter 5032 transmits a received frame to the switch unit 5034 just as it is. - Information indicating the association of the virtual storage area provided by the
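The protocol converter's decision might be sketched as follows; the names, the frame representation, and the area-to-controller mapping are assumptions.

```python
# Sketch of the protocol converter's dispatch: a command aimed at a virtual
# storage area provided by the switch itself goes to the corresponding disk
# controller; every other frame passes to the switch unit unchanged.

def dispatch(frame, own_areas):
    """own_areas: mapping of virtual storage area id -> disk controller id."""
    controller = own_areas.get(frame.get("target_area"))
    if controller is not None:
        return ("disk_controller", controller)
    return ("switch_unit", None)
```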
copy management switch 503 with the logical storage area provided by the disk controller 5035 is stored in the storage device of the management unit 5037. In addition, when thecopy management switch 503 is started-up, the CPU of the management unit 5037 stores this information in a memory possessed by the port processor 5033. Further, the management unit 5037 holds a name database in a memory inside the management unit 5037, and thereby executes, in the CPU inside, processing that responds to an inquiry about WWNN and the like. - The switch unit 5034 performs routing according to address information of the frame.
- The disk controller 5035 holds, in an internal memory, information about the association between the logical storage area to be provided and the storage area included in each hard disk 5036 connected. On receipt of a command from the switch unit 5034 (here, the command is stored in the frame), the disk controller 5035 determines a hard disk 5036 corresponding to a storage area specified by the command, and also a stored location in the hard disk 5036, using the information about the association held in the memory. Then, the disk controller 5035 issues a data read command and the like to the corresponding hard disk 5036 so that the command is handled.
- Incidentally, the reason why the switch unit 5034, the disk controller 5035, and the management unit 5037 are duplicated in
FIG. 15 is to achieve redundancy so that reliability is improved. Hence, this configuration is not always required for the present invention. - In the
copy management switch 503 according to this embodiment, theaddress translation program 126 is stored in a memory possessed by the port processor 5033 of the protocol converter 5032. If the port 5031 of the protocol converter 5032 receives a frame, or if the switch unit 5034 receives a frame, theaddress translation program 126 is executed in theport processor 5033 a. In addition, a memory in the management unit 5037 stores the programs that are stored in thememory 1122 described in the first embodiment. - A flow of a series of processing executed in the computer system according to this embodiment will be described below.
- To begin with, the
host 105 issues a read request for original data held in thestorage device 104 a. - On receiving the read request, the protocol converter 5032 of the
copy management switch 503 first executes theaddress translation program 126 in the port processor 5033, and then converts a received frame containing the read request into a frame containing a read request for a storage area that stores copy data. - The protocol converter 5032 then checks contents of the read request converted. In this embodiment, the read request is targeted for the storage area provided by the
copy management switch 503. Accordingly, the protocol converter 5032 transmits the converted frame containing the read request to the disk controller 5035. - The disk controller 5035 that has received the read request through the switch unit 5034 reads out specified data from the hard disk 5036 according to the received read request, and then transmits the read data to the protocol converter 5032 through the switch unit 5034.
- The protocol converter 5032 which has received the read data executes the
address translation program 126 using the port processor 5033, and thereby changes a source of the read data to a SAN address of thestorage device 104 a. Then, the protocol converter 5032 transmits the changed frame containing the read data to thehost 105 through the port 5031. Thehost 105 receives the read data as if it were transmitted from thestorage device 104 a. - In this manner, the read request for the original data stored in the
storage device 104 a, which has been issued from thehost 105, is handled by thecopy management switch 503 using the copy data stored in the storage area that is provided by thecopy management switch 503. - In this embodiment, the storage area provided by the
copy management switch 503 is used to hold the copy data. However, the copy management switch 503 in this embodiment can be used in the same manner as the copy management switch 103 described in the above-mentioned embodiments, i.e., from the first to fourth embodiments. - Next, a sixth embodiment of the present invention will be described below. In the above-mentioned embodiments, i.e., from the first to fifth embodiments, the whole original data or a specified part of the original data is copied. Moreover, according to an instruction by the administrator or the like, data is copied concurrently with the settings of the copy management tables 131, 231. However, in the sixth embodiment described here, according to a read request from the
host 105, a copy management switch 703 described below copies data, which is a target of the read request, from thestorage device 104 a storing original data to thestorage device 104 b specified. The operation performed in this manner enables efficient use of the storage capacity possessed by thestorage device 104 b. - A configuration of a computer system in the sixth embodiment is the same as that in the first embodiment except that the
copy management switch 103 is replaced by the copy management switch 703. - This embodiment is different from the first embodiment in the information and the like stored in memories, as described below. - Firstly, the
CPU 1121 executes the initialization program 721 to create a cache table 741 corresponding to acache index 737 of eachcopy management entry 732 of a copy management table 731 described later. The cache table 741 will be described later. - Secondly, the
CPU 1121 executes a management-terminal-submitted request processing program 722 (the management-terminal-submittedrequest processing program 122 used for the copy management switch 703), and in response to an addition or deletion of thecache index 737 resulting from the change of contents of the copy management table 731, the cache table 741 is added or deleted. - Thirdly, the
CPU 1141 executes the address translation program 726 (theaddress translation program 126 used for the copy management switch 703). When the port 111 receives a frame, the CPU judges address information included in a read request, and then instructs thecontroller 112 to execute a cache processing program 727. Moreover, theCPU 1141 translates the address information about read data and the like. Details in the series of processing will be described later. - Fourthly, the
CPU 1121 executes the cache processing program 727, and then, by use of the cache table 741, makes a judgment as to whether or not there exists copy data corresponding to the read request for the original data. If the copy data exists, then the CPU 1121 issues a read request to the storage device 104 storing the copy data (in this case, the storage device 104 b). On the other hand, if the copy data does not exist, the CPU 1121 copies the original data, which is specified by the read request, from the storage device 104 a to the storage device 104 b, and then transmits the copy data to the host 105. Details in the series of processing will be described later. -
FIG. 16 is a diagram illustrating the configuration of the copy management table 731. The copy management table 731 has a plurality ofcopy management entries 732. In contrast to the first embodiment, eachcopy management entry 732 uses a SAN address and a LUN to manage the association between a stored location of original data and a location to which data is copied. To be more specific, eachcopy management entry 732 comprises anentry 733 for storing an original SAN address, anentry 734 for storing an original LUN, anentry 735 for storing a copy SAN address, anentry 736 for storing a copy LUN, and anentry 737 for storing cache index information. In other words, thecopy management entry 732 specifies: thestorage device 104 a that stores original data; a LUN of the original data; thestorage device 104 b used to store copy data; and a LUN prepared for the copy data. - It should be noted that the cache index is information used to specify the cache table 741 described below.
-
FIG. 17 is a diagram illustrating an example of how the cache table 741 is configured. The number of the cache tables 741 is equivalent to the number of thecopy management entries 732 used in the copy management table 731. The cache table 741 is associated with the copy management table 731 on the basis of a cache index stored in theentry 737. Each cache table 741 has a plurality ofcache entries 742. The number of thecache entries 742 in each cache table 741 is determined by the capacity of thememory 1122, and the like, when the system is designed. - The
cache entry 742 comprises anentry 743 for storing an original LBA, anentry 744 for storing a copy LBA, anentry 745 for storing an original length, anentry 746 for storing a non-access counter, and anentry 747 for storing a validity flag. - The non-access counter is a counter used in the cache processing program 727. How the non-access counter is used will be described later. The validity flag indicates that the
cache entry 742 to which the validity flag belongs is valid or invalid. For instance, if the validity flag is 1, it indicates that thecache entry 742 is valid; and if the validity flag is 0, it indicates that thecache entry 742 is invalid. - In other words, the valid cache entry 742 (that is to say, 1 is stored in the entry 747) indicates that a storage area of the
storage device 104 a, starting from the original LBA and having a length of the original length, is copied to a storage area of thestorage device 104 b, starting from the copy LBA. -
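A valid cache entry, and the coverage test implied by it, can be sketched as follows; the field names and function name are assumptions.

```python
# Sketch of a cache entry 742 (field names assumed). A request hits the
# cache only if some valid entry's copied area fully covers the requested
# range [lba, lba + length).

def cache_lookup(cache_table, lba, length):
    for e in cache_table:
        covered = (e["orig_lba"] <= lba
                   and lba + length <= e["orig_lba"] + e["orig_len"])
        if e["valid"] == 1 and covered:
            return e
    return None
```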
FIG. 18 is a flowchart illustrating a series of processing executed in the address translation unit 114 according to this embodiment. Incidentally, the series of processing is performed when theCPU 1141 executes the address translation program 726. - In the first place, the instant the port 111 receives a frame, the
CPU 1141 starts execution of the address translation program 726 (step 751). - Next, the
CPU 1141 judges whether or not the received frame is intended for a read request (step 752). If the received frame is intended for a read request, theCPU 1141 judges whether or not data requested by the read request is data stored in a LUN of the storage device 104 as a target to be copied (more specifically, original data). To be more specific, theCPU 1141 searches for thecopy management entry 732 in which a destination SAN address, and a LUN, of the read request match values stored in theentries 733, 734 (step 753). - If the
copy management entry 732 satisfying the condition is found instep 753, theCPU 1141 instructs thecontroller 112 to execute the cache processing program 727. - At this time, the
CPU 1141 transmits to thecontroller 112 the received frame, and information stored in thecopy management entry 732 that has been selected in step 753 (step 754). The instant the processing ofstep 754 ends, theCPU 1141 ends the series of processing. - On the other hand, if it is judged that the frame received in
step 752 is not intended for a read request, the CPU 1141 judges whether or not the frame is intended for read data or a response (step 755). If the frame is intended for read data or a response, the CPU 1141 judges whether or not the received frame is transmitted from the storage device 104 (in this case, 104 b) that stores copy data. To be more specific, the CPU 1141 judges whether or not the copy management table 731 has a copy management entry 732 whose entry 735 stores a value that agrees with the source of the frame (step 756). - If the
copy management entry 732 satisfying the condition exists in the copy management table 731 instep 756, theCPU 1141 uses thecopy management entry 732, which has been found instep 756, to change a source of the frame to an original SAN address stored in the entry 733 (step 757). - If a destination of the frame as the read request does not exist in the copy management table 731 in
step 753, if it is judged instep 755 that the frame is neither read data nor a response, or if a source of the frame does not exist in the copy management table 731 instep 756, theCPU 1141 transmits the frame completing the processing to theswitch processing unit 115 before ending the series of processing (step 758). -
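The flow of FIG. 18 can be summarized in code. This is a sketch; the names and the frame encoding are assumptions, and only the three outcomes (hand off to cache processing in step 754, rewrite the source in step 757, forward unchanged in step 758) follow the text.

```python
# Sketch of the FIG. 18 dispatch. A read request for an original LUN is
# handed to cache processing; read data or a response from the copy device
# has its source rewritten to the original; everything else is forwarded.

def translate(frame, copy_table):
    if frame["type"] == "read_request":
        for e in copy_table:
            if (frame["dst"], frame["lun"]) == (e["orig_san"], e["orig_lun"]):
                return ("cache_processing", frame)     # step 754
        return ("forward", frame)                      # step 758
    if frame["type"] in ("read_data", "response"):
        for e in copy_table:
            if frame["src"] == e["copy_san"]:
                return ("forward", dict(frame, src=e["orig_san"]))  # step 757
    return ("forward", frame)                          # step 758
```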
FIG. 19 is a diagram illustrating a series of processing executed by thecontroller 112 according to an instruction from the address translation unit 114 instep 754. The series of processing proceeds when thecontroller 112 executes the cache processing program 727. - Upon receipt of instruction from the address translation unit 114, the
controller 112 judges whether or not a storage area of thestorage device 104 a storing original data specified by a LBA, and a length, of the read request, which is contents of the frame received from the address translation unit 114, is stored in thestorage device 104 b in which copy data is stored. To be more specific, in the cache table 741 specified by a value stored in theentry 737 of thecopy management entry 732, which has been transmitted from the address translation unit 114, theCPU 1121 judges whether or not an area specified by a LBA, and a length, of the read request is included in an area specified by theentries cache entry 742 whose validity flag is 1 exists (step 761). - If the
cache entry 742 satisfying the condition of step 761 exists, the CPU 1121 updates the non-access counters 746 of the valid cache entries 742 in the cache table 741 used in step 761. More specifically, the CPU 1121 sets the value of the entry 746 in the cache entry 742 satisfying the condition of step 761 to 0, and then increments by one the value stored in the entry 746 of every other valid cache entry 742 (step 762). - After that, the
CPU 1121 generates a read request for the copy data. To be more specific, the CPU 1121 changes the destination SAN address and LUN of the frame to the respective values stored in the copy management entry 732 transmitted from the address translation unit 114. Moreover, the CPU 1121 uses the information stored in the cache entry 742 found in step 761 to change the LBA to the value determined by (the LBA specified by the read request + the value stored in the entry 744 − the value stored in the entry 743), and to change the length to the value stored in the entry 745 (step 763). The CPU 1121 then transmits the processed frame to the switch processing unit 115 (step 764). - On the other hand, if the
cache entry 742 satisfying the condition of step 761 does not exist, the CPU 1121 judges whether or not the storage area of the storage device 104 b for storing copy data, which is specified by the copy management entry 732, has a vacant area large enough to store data of the length specified by the read request. Further, the CPU 1121 judges whether or not the cache table 741 includes a cache entry 742 that is not in use. More specifically, using the information in all valid cache entries 742 of the cache table 741, the CPU 1121 checks the storage areas currently used in the storage device 104 b, and thereby searches the storage device 104 b for a free storage area whose length is greater than or equal to the length value included in the read request (step 765). - If a storage area which is not registered in any
valid cache entry 742 and has a length greater than or equal to the length value included in the read request is not found in the storage device 104 b, or if no cache entry 742 can be reused because all cache entries 742 are valid, the CPU 1121 deletes or invalidates one of the valid cache entries 742 in the cache table 741 to extend the free storage area. - To be more specific, the
CPU 1121 finds the cache entry 742 whose entry 746 holds the largest value among the entries 746 of all valid cache entries 742, and then sets the value of the entry 747 of the found cache entry 742 to 0 (step 766). After that, the CPU 1121 repeats the processing of step 765. - If a storage area which is not registered in any
valid cache entry 742 and has a length greater than or equal to the length value included in the read request is found in the storage device 104 b, and at least one invalid cache entry 742 is found, the CPU 1121 updates the found cache entry 742 and stores in it the association between the original data specified by the read request and the found storage area of the storage device 104 b. To do so, the original data is copied first: the CPU 1121 reads out the original data specified by the read request from the storage device 104 a that holds it. More specifically, the CPU 1121 creates a read request for the original data whose source is the SAN address of the controller 112, and then transmits the read request to the switch processing unit 115. Subsequently, the CPU 1121 stores in the memory 1122 the read data transmitted from the storage device holding the original data (step 767). - After that, the
CPU 1121 transmits the original data stored in the memory 1122 to the storage area of the storage device 104 b that was found in step 765 (step 768). The CPU 1121 then updates the cache table 741. To be more specific, in the invalid cache entry 742 found in step 765, the CPU 1121 stores in the entry 744 the copy LBA corresponding to the storage area that holds the data transmitted from the memory 1122, and also stores the LBA and the length of the original data in the respective entries 743 and 745. Moreover, the CPU 1121 sets the value of the entry 746 in the cache entry 742 to 0, and sets the value of the entry 747 to 1 (step 769). After that, the CPU 1121 executes the processing of step 763 and beyond. - A series of operations according to this embodiment is summarized below.
- To begin with, the user or administrator of the system uses the
management terminal 106 connected to the copy management switch 703 to make settings in the copy management table 731 so that the original data stored in a storage area specified by a LUN in the storage device 104 a is copied to a storage area specified by a LUN in the storage device 104 b. At this point, no data has yet been copied to the storage device 104 b. In other words, although the copy management entry 732 exists in the copy management table 731, all the cache entries 742 of the cache table 741 corresponding to the cache index 737 of the copy management entry 732 are invalid; this is the initial state. - If the
host 105 issues a read request for the original data held in the storage device 104 a in the initial state, a frame containing the read request arrives at the copy management switch 703. - On detecting the receipt of the frame, the address translation unit 114 starts the address translation program 726. Here, the address translation unit 114 judges that the read request is a request for the original data, and therefore instructs the
controller 112 to execute the cache processing program 727. - The instructed
controller 112 then searches for the cache entry 742 corresponding to the data specified by the read request. At this point, because the data has not yet been transferred to the storage device 104 b, the corresponding cache entry 742 is not found. For this reason, the controller 112 issues a read request to the storage device 104 a, and then transfers the read data to the storage device 104 b. In addition, the controller 112 stores in the cache table 741 the cache entry 742 corresponding to the data transferred to the storage device 104 b. - Moreover, the
controller 112 generates, from the received frame containing the read request, a frame containing a read request for the copy data whose source is the host 105. Then, the controller 112 transmits the generated frame to the switch processing unit 115. - The
switch processing unit 115 transmits the frame to the storage device 104 b. The storage device 104 b, which has received the frame, then transmits a frame of read data destined for the host 105 via the copy management switch 703. - In this case, the address translation unit 114, which has received the frame of read data, judges the read data of the received frame to be copy data. The address translation unit 114 changes the source of the read data to the
storage device 104 a, and then transfers the read data to the switch processing unit 115. - The
switch processing unit 115 transfers the read data to the host 105. The read data arrives at the host 105 as data transmitted from the storage device 104 a. - As a result of the flow of the processing described above, a copy is created in the
storage device 104 b according to the read request from the host 105, and consequently the cache entry 742 is created in the cache table 741. - After that, if the
host 105 issues a read request for the same original data again, the controller 112 receives a frame containing the read request and can now find the cache entry 742 corresponding to the original data. In this case, the controller 112 generates a read request for the copy data whose source is the host 105 according to the frame of the read request, and then transmits the frame to the switch processing unit 115. - In this manner, the second read request is handled by use of only the copy data stored in the
storage device 104 b. This speeds up the processing of the read request. Additionally, in this embodiment, only the original data that is actually accessed by the host is copied. Moreover, the non-access counter 746 is used to delete the cache entry 742 for the copy data that has not been accessed for the longest time. In other words, only data that the host accesses frequently is held in the storage device 104 b. Thus, this embodiment enables efficient use of the storage capacity of the storage device 104 b. - The method used to delete the
cache entry 742, described in this embodiment, is the method generally called LRU (Least Recently Used). However, another replacement method may also be used. - Moreover, in this embodiment, the copy management device reads out from the
storage device 104 a the original data specified by a read request issued from the host 105, and then stores the read data in the memory 1122 before transferring it to the storage device 104 b, which stores the copy data. However, not only the original data specified by the read request issued from the host 105 but also data located before and after it may be read out from the storage device 104 a. - With respect to the storage area specified by a read request issued from the
host 105, data may be read out from a storage area that starts at the same location as the specified storage area but is longer than the specified length. Reading out data in this manner increases the possibility that, if the host 105 issues read requests for consecutive areas, the data requested by the next read request will already be present in the storage device 104 b. Further, the whole storage area specified by a SAN address and a LUN may also be read. - In addition, if the
storage device 104 a can handle the SCSI EXTENDED COPY command, a data read request to the storage device 104 a may be replaced with a request that the storage device 104 a copy the data to the storage device 104 b. Moreover, the controller 112 may be provided with a dedicated buffer memory used to transfer original data to the storage device 104 b for storing copy data. Furthermore, the read data may be transmitted to the host 105 concurrently with its transfer to the storage device 104 b that holds the copy. - According to the present invention, it is possible to speed up access to data held in a storage device connected to a SAN.
- Further, according to the present invention, since the amount of data flowing through a SAN can be reduced, a load on the SAN can be reduced.
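To make the cache processing of FIG. 19 (steps 761 through 769) and the LRU policy above concrete, the following sketch models the cache table 741 in Python. All names and layouts are assumptions for illustration; the mapping to entries 743 through 747 follows the description, but the free-area search and frame handling are omitted.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CacheEntry:
    orig_lba: int    # entry 743: starting LBA of the cached original data
    copy_lba: int    # entry 744: starting LBA in the copy device (104 b)
    length: int      # entry 745: length of the cached area
    non_access: int  # entry 746: non-access counter, aged on every hit
    valid: bool      # entry 747: validity flag (1 = valid)

def lookup(table: List[CacheEntry], lba: int, length: int) -> Optional[CacheEntry]:
    """Step 761: return the valid entry whose area fully contains the
    requested area [lba, lba + length), or None on a cache miss."""
    for e in table:
        if e.valid and e.orig_lba <= lba and lba + length <= e.orig_lba + e.length:
            return e
    return None

def age(table: List[CacheEntry], hit: CacheEntry) -> None:
    """Step 762: reset the hit entry's counter; age all other valid entries."""
    for e in table:
        if e.valid:
            e.non_access = 0 if e is hit else e.non_access + 1

def remap(hit: CacheEntry, lba: int) -> int:
    """Step 763: requested LBA + entry 744 - entry 743 gives the LBA of
    the same data inside the copy device."""
    return lba + hit.copy_lba - hit.orig_lba

def evict_lru(table: List[CacheEntry]) -> Optional[CacheEntry]:
    """Step 766: invalidate the valid entry with the largest non-access
    counter, i.e. the least recently used area."""
    victim = max((e for e in table if e.valid),
                 key=lambda e: e.non_access, default=None)
    if victim is not None:
        victim.valid = False  # corresponds to setting entry 747 to 0
    return victim

def register(entry: CacheEntry, orig_lba: int, copy_lba: int, length: int) -> None:
    """Step 769: fill an invalid entry after the original data has been
    copied into a free area of the copy device, then mark it valid."""
    entry.orig_lba, entry.copy_lba, entry.length = orig_lba, copy_lba, length
    entry.non_access = 0
    entry.valid = True
```

On a hit, age and remap are applied and the read request is redirected to the copy device; on a miss, the original data is copied (evicting with evict_lru when no free area or invalid entry remains) and recorded with register. The read-ahead variant discussed above would simply pass a longer length than the requested one into the copy step.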
Claims (12)
1. A network system comprising:
a computer;
a switch that is connected to said computer via a network;
a first storage device that is connected to said switch via the network; and
a second storage device that is connected to said switch via the network;
wherein said switch beforehand transfers data stored in said first storage device to said second storage device;
said computer issues a read request for the data stored in said first storage device;
when receiving said read request, said switch converts said read request for the data stored in said first storage device into a data read request to said second storage device, and then transmits the converted data read request to said second storage device;
when receiving said data read request, said second storage device transfers data corresponding to the received data read request to said switch; and
when receiving the data, said switch transfers the received data to said computer as data transferred from said first storage device.
2. A network system according to claim 1 , further comprising a second computer that is connected to said switch;
wherein said switch transfers data stored in said first storage device to said second storage device according to an instruction from said second computer.
3. A network system according to claim 1 , wherein:
when converting the data read request to said first storage device into the data read request to said second storage device, said switch converts information indicating a source of said data read request into another information, and then transmits the converted data read request including the another information to said second storage device; and
when receiving, from said second storage device, data corresponding to the converted data read request, said switch converts said another information included as a destination of the data into information used for said computer.
4. A network system comprising:
a computer;
a switch that is connected to said computer via a network;
a first storage device that is connected to said switch via the network; and
a second storage device that is connected to said switch via the network;
wherein said switch beforehand transfers data stored in said first storage device to said second storage device;
said switch provides said computer with a third storage device corresponding to said first storage device, said third storage device being a virtual storage;
said computer issues a data read request to said third storage device;
when receiving said data read request, said switch converts the data read request to said third storage device into a data read request to said second storage device, and then transmits the converted data read request to said second storage device;
when receiving said data read request, said second storage device transfers, to said switch, data corresponding to the received data read request; and
when receiving the data, said switch transfers the received data to said computer as data transferred from said third storage device.
5. A network system according to claim 4 , wherein a domain address that is the same as that of said second storage device is assigned to said third storage device that is the virtual storage.
6. A network system comprising:
a computer;
a first storage device that is connected to said computer via a network; and
a second storage device that is connected to said computer via the network;
wherein said second storage device comprises a switch unit that is connected to said computer and said first storage device via the network, and a storage unit that is connected to said switch unit via an internal network;
said switch unit beforehand transfers data stored in said first storage device to said storage unit;
said computer issues a read request for the data stored in said first storage device;
when receiving said read request, said switch unit converts the read request for the data stored in said first storage device into a data read request to said storage unit, and then transmits the converted data read request to said storage unit;
when receiving said data read request, said storage unit transfers, to said switch unit, data corresponding to the received data read request; and
when receiving the data, said switch unit transfers the received data to said computer as data transferred from said first storage device.
7. A network system comprising:
a computer;
a switch that is connected to said computer via a network;
a first storage device that is connected to said switch via the network; and
a second storage device that is connected to said switch via the network;
wherein said computer issues a read request for the data stored in said first storage device;
when said switch receives said read request, if the data stored in said first storage device is stored in said second storage device, said switch converts said read request for the data stored in said first storage device into a data read request to said second storage device, and then transmits the converted data read request to said second storage device, whereas if the data stored in said first storage device is not stored in said second storage device, said switch transmits said read request to said first storage device without converting said read request for the data;
when receiving said data read request, said second storage device transfers, to said switch, data corresponding to the received data read request; and
when receiving the data, said switch transfers the received data to said computer as data transferred from said first storage device.
8. A network system according to claim 7 , wherein said switch has information indicating whether or not data stored in said first storage device is stored in said second storage device.
9. A network system according to claim 8 , wherein if the data stored in said first storage device is not stored in said second storage device, said switch transfers the data that has been transferred from said first storage device, to said second storage device in response to said read request for the data, and then updates said information.
10. A network system according to claim 9 , wherein when said switch transfers the data that has been transferred from said first storage device, to said second storage device, if an amount of free storage capacity in said second storage device is not enough to store the data, said switch deletes some amount of data currently stored in said second storage device in a manner that data with the least frequency of use by said computer is deleted first, thereby transfers the data to said second storage device, and then updates said information.
11. A switch that is connected to a computer, a first storage device, and a second storage device, said switch comprising:
a port unit that is connected to an external device;
a converter for converting commands and data which have been received by said port unit; and
a switch unit for relaying said command and said data according to address information;
wherein said converter beforehand transfers data stored in said first storage device to said second storage device, and when receiving from said computer an access request for the data stored in said first storage device, said converter converts the access request into an access request to said second storage device;
said switch unit transmits to said second storage device through said port unit the access request to said second storage device; and
when receiving data corresponding to said access request from said second storage device, said converter converts the data into data transmitted from said first storage device, and then transfers the converted data to said computer.
12. A switch according to claim 11 , wherein a second computer is connected to said switch, and said converter transfers the data stored in said first storage device to said second storage device according to an instruction from said second computer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/407,167 US20060187908A1 (en) | 2003-06-18 | 2006-04-18 | Network system and its switches |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003172770A JP4278445B2 (en) | 2003-06-18 | 2003-06-18 | Network system and switch |
JP2003-172770 | 2003-06-18 | ||
US10/646,036 US7124169B2 (en) | 2003-06-18 | 2003-08-22 | Network system and its switches |
US11/407,167 US20060187908A1 (en) | 2003-06-18 | 2006-04-18 | Network system and its switches |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/646,036 Continuation US7124169B2 (en) | 2003-06-18 | 2003-08-22 | Network system and its switches |
Publications (1)
Publication Number | Publication Date |
---|---|
US20060187908A1 true US20060187908A1 (en) | 2006-08-24 |
Family
ID=33410941
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/646,036 Expired - Lifetime US7124169B2 (en) | 2003-06-18 | 2003-08-22 | Network system and its switches |
US11/407,167 Abandoned US20060187908A1 (en) | 2003-06-18 | 2006-04-18 | Network system and its switches |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/646,036 Expired - Lifetime US7124169B2 (en) | 2003-06-18 | 2003-08-22 | Network system and its switches |
Country Status (3)
Country | Link |
---|---|
US (2) | US7124169B2 (en) |
EP (1) | EP1489524B1 (en) |
JP (1) | JP4278445B2 (en) |
US7697554B1 (en) | 2005-12-27 | 2010-04-13 | Emc Corporation | On-line data migration of a logical/virtual storage array by replacing virtual names |
US7697515B2 (en) * | 2005-12-27 | 2010-04-13 | Emc Corporation | On-line data migration of a logical/virtual storage array |
JP4929808B2 (en) * | 2006-04-13 | 2012-05-09 | 富士通株式会社 | Network device connection apparatus and network device connection method |
US8533408B1 (en) | 2006-06-29 | 2013-09-10 | Emc Corporation | Consolidating N-storage arrays into one storage array using virtual array non-disruptive data migration |
US8539177B1 (en) | 2006-06-29 | 2013-09-17 | Emc Corporation | Partitioning of a storage array into N-storage arrays using virtual array non-disruptive data migration |
US8583861B1 (en) | 2006-06-29 | 2013-11-12 | Emc Corporation | Presentation of management functionality of virtual arrays |
US7757059B1 (en) | 2006-06-29 | 2010-07-13 | Emc Corporation | Virtual array non-disruptive management data migration |
US8452928B1 (en) | 2006-06-29 | 2013-05-28 | Emc Corporation | Virtual array non-disruptive migration of extended storage functionality |
JP2008158733A (en) * | 2006-12-22 | 2008-07-10 | Kddi Corp | Cache control method in storage area network, switch device, and program |
US9063896B1 (en) | 2007-06-29 | 2015-06-23 | Emc Corporation | System and method of non-disruptive data migration between virtual arrays of heterogeneous storage arrays |
US9098211B1 (en) | 2007-06-29 | 2015-08-04 | Emc Corporation | System and method of non-disruptive data migration between a full storage array and one or more virtual arrays |
US7836332B2 (en) * | 2007-07-18 | 2010-11-16 | Hitachi, Ltd. | Method and apparatus for managing virtual ports on storage systems |
US7970903B2 (en) * | 2007-08-20 | 2011-06-28 | Hitachi, Ltd. | Storage and server provisioning for virtualized and geographically dispersed data centers |
JP5164628B2 (en) * | 2008-03-24 | 2013-03-21 | 株式会社日立製作所 | Network switch device, server system, and server transfer method in server system |
JP4576449B2 (en) * | 2008-08-29 | 2010-11-10 | 富士通株式会社 | Switch device and copy control method |
JP5505380B2 (en) * | 2011-07-11 | 2014-05-28 | 富士通株式会社 | Relay device and relay method |
US9426060B2 (en) | 2013-08-07 | 2016-08-23 | International Business Machines Corporation | Software defined network (SDN) switch clusters having layer-3 distributed router functionality |
US20150098475A1 (en) * | 2013-10-09 | 2015-04-09 | International Business Machines Corporation | Host table management in software defined network (sdn) switch clusters having layer-3 distributed router functionality |
US9286238B1 (en) * | 2013-12-31 | 2016-03-15 | Emc Corporation | System, apparatus, and method of cache management |
US10394573B2 (en) * | 2015-04-07 | 2019-08-27 | Avago Technologies International Sales Pte. Limited | Host bus adapter with built-in storage for local boot-up |
US10873644B1 (en) * | 2019-06-21 | 2020-12-22 | Microsoft Technology Licensing, Llc | Web application wrapper |
CN111488382A (en) * | 2020-04-16 | 2020-08-04 | 北京思特奇信息技术股份有限公司 | Data calling method and system and electronic equipment |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6779119B1 (en) * | 1999-06-30 | 2004-08-17 | Koninklijke Philips Electronics N.V. | Actual and perceived response time, user interface, and security via usage patterns |
US6643795B1 (en) * | 2000-03-30 | 2003-11-04 | Hewlett-Packard Development Company, L.P. | Controller-based bi-directional remote copy system with storage site failover capability |
US6766430B2 (en) * | 2000-07-06 | 2004-07-20 | Hitachi, Ltd. | Data reallocation among storage systems |
JP2002132455A (en) | 2000-10-25 | 2002-05-10 | Hitachi Ltd | Cache manager and computer system including it |
US6985956B2 (en) | 2000-11-02 | 2006-01-10 | Sun Microsystems, Inc. | Switching system |
US7499877B2 (en) | 2001-02-21 | 2009-03-03 | American Management Systems | Method and apparatus for dynamically maintaining and executing data definitions and/or business rules for an electronic procurement system |
EP1595363B1 (en) * | 2001-08-15 | 2016-07-13 | The Board of Governors for Higher Education State of Rhode Island and Providence Plantations | Scsi-to-ip cache storage device and method |
US6976134B1 (en) * | 2001-09-28 | 2005-12-13 | Emc Corporation | Pooling and provisioning storage resources in a storage network |
- 2003
  - 2003-06-18 JP JP2003172770A patent/JP4278445B2/en not_active Expired - Fee Related
  - 2003-08-14 EP EP20030018489 patent/EP1489524B1/en not_active Expired - Fee Related
  - 2003-08-22 US US10/646,036 patent/US7124169B2/en not_active Expired - Lifetime
- 2006
  - 2006-04-18 US US11/407,167 patent/US20060187908A1/en not_active Abandoned
Patent Citations (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5511177A (en) * | 1991-11-21 | 1996-04-23 | Hitachi, Ltd. | File data multiplexing method and data processing system |
US5987506A (en) * | 1996-11-22 | 1999-11-16 | Mangosoft Corporation | Remote access and geographically distributed computers in a globally addressable storage environment |
US6112239A (en) * | 1997-06-18 | 2000-08-29 | Intervu, Inc | System and method for server-side optimization of data delivery on a distributed computer network |
US6449688B1 (en) * | 1997-12-24 | 2002-09-10 | Avid Technology, Inc. | Computer system and process for transferring streams of data between multiple storage units and multiple applications in a scalable and reliable manner |
US6493750B1 (en) * | 1998-10-30 | 2002-12-10 | Agilent Technologies, Inc. | Command forwarding: a method for optimizing I/O latency and throughput in fibre channel client/server/target mass storage architectures |
US20040158673A1 (en) * | 1998-12-22 | 2004-08-12 | Hitachi, Ltd. | Disk storage system including a switch |
US6701411B2 (en) * | 1998-12-22 | 2004-03-02 | Hitachi, Ltd. | Switch and storage system for sending an access request from a host to a storage subsystem |
US20020091898A1 (en) * | 1998-12-22 | 2002-07-11 | Hitachi, Ltd. | Disk storage system |
US6772365B1 (en) * | 1999-09-07 | 2004-08-03 | Hitachi, Ltd. | Data backup method of using storage area network |
US20020103889A1 (en) * | 2000-02-11 | 2002-08-01 | Thomas Markson | Virtual storage layer approach for dynamically associating computer storage with processing hosts |
US6654795B1 (en) * | 2000-02-25 | 2003-11-25 | Brantley W. Coile | System and method for distribution of network file accesses over network storage devices |
US6691198B1 (en) * | 2000-03-30 | 2004-02-10 | Western Digital Ventures, Inc. | Automatically transmitting scheduling data from a plurality of storage systems to a network switch for scheduling access to the plurality of storage systems |
US6742020B1 (en) * | 2000-06-08 | 2004-05-25 | Hewlett-Packard Development Company, L.P. | System and method for managing data flow and measuring service in a storage network |
US20040073677A1 (en) * | 2000-06-29 | 2004-04-15 | Hitachi, Ltd, | Computer system using a storage area network and method of handling data in the computer system |
US6977927B1 (en) * | 2000-09-18 | 2005-12-20 | Hewlett-Packard Development Company, L.P. | Method and system of allocating storage resources in a storage area network |
US20020042866A1 (en) * | 2000-10-11 | 2002-04-11 | Robert Grant | Method and circuit for replicating data in a fibre channel network, or the like |
US6477618B2 (en) * | 2000-12-28 | 2002-11-05 | Emc Corporation | Data storage system cluster architecture |
US20020112113A1 (en) * | 2001-01-11 | 2002-08-15 | Yotta Yotta, Inc. | Storage virtualization system and methods |
US20020144058A1 (en) * | 2001-02-05 | 2002-10-03 | Burger Eric William | Network-based disk redundancy storage system and method |
US20040233910A1 (en) * | 2001-02-23 | 2004-11-25 | Wen-Shyen Chen | Storage area network using a data communication protocol |
US20040078599A1 (en) * | 2001-03-01 | 2004-04-22 | Storeage Networking Technologies | Storage area network (san) security |
US20020143999A1 (en) * | 2001-03-30 | 2002-10-03 | Kenji Yamagami | Path selection methods for storage based remote copy |
US20020156887A1 (en) * | 2001-04-18 | 2002-10-24 | Hitachi, Ltd. | Storage network switch |
US6876656B2 (en) * | 2001-06-15 | 2005-04-05 | Broadcom Corporation | Switch assisted frame aliasing for storage virtualization |
US7343410B2 (en) * | 2001-06-28 | 2008-03-11 | Finisar Corporation | Automated creation of application data paths in storage area networks |
US20030005119A1 (en) * | 2001-06-28 | 2003-01-02 | Intersan, Inc., A Delaware Corporation | Automated creation of application data paths in storage area networks |
US20030074403A1 (en) * | 2001-07-06 | 2003-04-17 | Harrow Ivan P. | Methods and apparatus for peer-to-peer services |
US6996668B2 (en) * | 2001-08-06 | 2006-02-07 | Seagate Technology Llc | Synchronized mirrored data in a data storage device |
US20030093541A1 (en) * | 2001-09-28 | 2003-05-15 | Lolayekar Santosh C. | Protocol translation in a storage system |
US20030189936A1 (en) * | 2001-10-18 | 2003-10-09 | Terrell William C. | Router with routing processors and methods for virtualization |
US20030120751A1 (en) * | 2001-11-21 | 2003-06-26 | Husain Syed Mohammad Amir | System and method for providing virtual network attached storage using excess distributed storage capacity |
US20050004979A1 (en) * | 2002-02-07 | 2005-01-06 | Microsoft Corporation | Method and system for transporting data content on a storage area network |
US20030158966A1 (en) * | 2002-02-19 | 2003-08-21 | Hitachi, Ltd. | Disk device and disk access route mapping |
US6947981B2 (en) * | 2002-03-26 | 2005-09-20 | Hewlett-Packard Development Company, L.P. | Flexible data replication mechanism |
US20030236851A1 (en) * | 2002-03-29 | 2003-12-25 | Cuddihy David J. | Method and system for improving the efficiency and ensuring the integrity of a data transfer |
US20030204597A1 (en) * | 2002-04-26 | 2003-10-30 | Hitachi, Inc. | Storage system having virtualized resource |
US20040088297A1 (en) * | 2002-10-17 | 2004-05-06 | Coates Joshua L. | Distributed network attached storage system |
US20040078466A1 (en) * | 2002-10-17 | 2004-04-22 | Coates Joshua L. | Methods and apparatus for load balancing storage nodes in a distributed network attached storage system |
US20040088574A1 (en) * | 2002-10-31 | 2004-05-06 | Brocade Communications Systems, Inc. | Method and apparatus for encryption or compression devices inside a storage area network fabric |
US20040103261A1 (en) * | 2002-11-25 | 2004-05-27 | Hitachi, Ltd. | Virtualization controller and data transfer control method |
US6957303B2 (en) * | 2002-11-26 | 2005-10-18 | Hitachi, Ltd. | System and managing method for cluster-type storage |
US20040111485A1 (en) * | 2002-12-09 | 2004-06-10 | Yasuyuki Mimatsu | Connecting device of storage device and computer system including the same connecting device |
US20040128456A1 (en) * | 2002-12-26 | 2004-07-01 | Hitachi, Ltd. | Storage system and data backup method for the same |
Cited By (192)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8015266B1 (en) * | 2003-02-07 | 2011-09-06 | Netapp, Inc. | System and method for providing persistent node names |
US7664917B2 (en) | 2005-03-24 | 2010-02-16 | Fujitsu Limited | Device and method for caching control, and computer product |
US20060218349A1 (en) * | 2005-03-24 | 2006-09-28 | Fujitsu Limited | Device and method for caching control, and computer product |
US9900410B2 (en) | 2006-05-01 | 2018-02-20 | Nicira, Inc. | Private ethernet overlay networks over a shared ethernet in a virtual environment |
US20090138577A1 (en) * | 2007-09-26 | 2009-05-28 | Nicira Networks | Network operating system for managing and securing networks |
US9876672B2 (en) | 2007-09-26 | 2018-01-23 | Nicira, Inc. | Network operating system for managing and securing networks |
US10749736B2 (en) | 2007-09-26 | 2020-08-18 | Nicira, Inc. | Network operating system for managing and securing networks |
US9083609B2 (en) | 2007-09-26 | 2015-07-14 | Nicira, Inc. | Network operating system for managing and securing networks |
US11683214B2 (en) | 2007-09-26 | 2023-06-20 | Nicira, Inc. | Network operating system for managing and securing networks |
US11190463B2 (en) | 2008-05-23 | 2021-11-30 | Vmware, Inc. | Distributed virtual switch for virtualized computer systems |
US11757797B2 (en) | 2008-05-23 | 2023-09-12 | Vmware, Inc. | Distributed virtual switch for virtualized computer systems |
US8954627B2 (en) | 2008-06-09 | 2015-02-10 | International Business Machines Corporation | Fibre channel N-port ID virtualization protocol |
US8341308B2 (en) | 2008-06-09 | 2012-12-25 | International Business Machines Corporation | Method and apparatus for a fibre channel N-port ID virtualization protocol |
US9952892B2 (en) | 2009-07-27 | 2018-04-24 | Nicira, Inc. | Automated network configuration of virtual machines in a virtual lab environment |
US10949246B2 (en) | 2009-07-27 | 2021-03-16 | Vmware, Inc. | Automated network configuration of virtual machines in a virtual lab environment |
US9697032B2 (en) | 2009-07-27 | 2017-07-04 | Vmware, Inc. | Automated network configuration of virtual machines in a virtual lab environment |
US9306910B2 (en) | 2009-07-27 | 2016-04-05 | Vmware, Inc. | Private allocated networks over shared communications infrastructure |
US11533389B2 (en) | 2009-09-30 | 2022-12-20 | Nicira, Inc. | Private allocated networks over shared communications infrastructure |
US9888097B2 (en) | 2009-09-30 | 2018-02-06 | Nicira, Inc. | Private allocated networks over shared communications infrastructure |
US10757234B2 (en) | 2009-09-30 | 2020-08-25 | Nicira, Inc. | Private allocated networks over shared communications infrastructure |
US11917044B2 (en) | 2009-09-30 | 2024-02-27 | Nicira, Inc. | Private allocated networks over shared communications infrastructure |
US10291753B2 (en) | 2009-09-30 | 2019-05-14 | Nicira, Inc. | Private allocated networks over shared communications infrastructure |
US11838395B2 (en) | 2010-06-21 | 2023-12-05 | Nicira, Inc. | Private ethernet overlay networks over a shared ethernet in a virtual environment |
US10951744B2 (en) | 2010-06-21 | 2021-03-16 | Nicira, Inc. | Private ethernet overlay networks over a shared ethernet in a virtual environment |
US9008087B2 (en) | 2010-07-06 | 2015-04-14 | Nicira, Inc. | Processing requests in a network control system with multiple controller instances |
US8966040B2 (en) | 2010-07-06 | 2015-02-24 | Nicira, Inc. | Use of network information base structure to establish communication between applications |
US10326660B2 (en) | 2010-07-06 | 2019-06-18 | Nicira, Inc. | Network virtualization apparatus and method |
US8743889B2 (en) | 2010-07-06 | 2014-06-03 | Nicira, Inc. | Method and apparatus for using a network information base to control a plurality of shared network infrastructure switching elements |
US11876679B2 (en) | 2010-07-06 | 2024-01-16 | Nicira, Inc. | Method and apparatus for interacting with a network information base in a distributed network control system with multiple controller instances |
US11539591B2 (en) | 2010-07-06 | 2022-12-27 | Nicira, Inc. | Distributed network control system with one master controller per logical datapath set |
US8775594B2 (en) | 2010-07-06 | 2014-07-08 | Nicira, Inc. | Distributed network control system with a distributed hash table |
US11223531B2 (en) | 2010-07-06 | 2022-01-11 | Nicira, Inc. | Method and apparatus for interacting with a network information base in a distributed network control system with multiple controller instances |
US8842679B2 (en) | 2010-07-06 | 2014-09-23 | Nicira, Inc. | Control system that elects a master controller instance for switching elements |
US9391928B2 (en) | 2010-07-06 | 2016-07-12 | Nicira, Inc. | Method and apparatus for interacting with a network information base in a distributed network control system with multiple controller instances |
US11509564B2 (en) | 2010-07-06 | 2022-11-22 | Nicira, Inc. | Method and apparatus for replicating network information base in a distributed network control system with multiple controller instances |
US8880468B2 (en) * | 2010-07-06 | 2014-11-04 | Nicira, Inc. | Secondary storage architecture for a network control system that utilizes a primary network information base |
US9172663B2 (en) | 2010-07-06 | 2015-10-27 | Nicira, Inc. | Method and apparatus for replicating network information base in a distributed network control system with multiple controller instances |
US11677588B2 (en) | 2010-07-06 | 2023-06-13 | Nicira, Inc. | Network control apparatus and method for creating and modifying logical switching elements |
US9106587B2 (en) | 2010-07-06 | 2015-08-11 | Nicira, Inc. | Distributed network control system with one master controller per managed switching element |
US10033640B2 (en) | 2013-07-08 | 2018-07-24 | Nicira, Inc. | Hybrid packet processing |
US9571386B2 (en) | 2013-07-08 | 2017-02-14 | Nicira, Inc. | Hybrid packet processing |
US10680948B2 (en) | 2013-07-08 | 2020-06-09 | Nicira, Inc. | Hybrid packet processing |
US10778557B2 (en) | 2013-07-12 | 2020-09-15 | Nicira, Inc. | Tracing network packets through logical and physical networks |
US11201808B2 (en) | 2013-07-12 | 2021-12-14 | Nicira, Inc. | Tracing logical network packets through physical network |
US10181993B2 (en) | 2013-07-12 | 2019-01-15 | Nicira, Inc. | Tracing network packets through logical and physical networks |
US9407580B2 (en) | 2013-07-12 | 2016-08-02 | Nicira, Inc. | Maintaining data stored with a packet |
US9887960B2 (en) | 2013-08-14 | 2018-02-06 | Nicira, Inc. | Providing services for logical networks |
US10764238B2 (en) | 2013-08-14 | 2020-09-01 | Nicira, Inc. | Providing services for logical networks |
US9952885B2 (en) | 2013-08-14 | 2018-04-24 | Nicira, Inc. | Generation of configuration files for a DHCP module executing within a virtualized container |
US11695730B2 (en) | 2013-08-14 | 2023-07-04 | Nicira, Inc. | Providing services for logical networks |
US9503371B2 (en) | 2013-09-04 | 2016-11-22 | Nicira, Inc. | High availability L3 gateways for logical networks |
US10003534B2 (en) | 2013-09-04 | 2018-06-19 | Nicira, Inc. | Multiple active L3 gateways for logical networks |
US9577845B2 (en) | 2013-09-04 | 2017-02-21 | Nicira, Inc. | Multiple active L3 gateways for logical networks |
US10389634B2 (en) | 2013-09-04 | 2019-08-20 | Nicira, Inc. | Multiple active L3 gateways for logical networks |
US9602398B2 (en) | 2013-09-15 | 2017-03-21 | Nicira, Inc. | Dynamically generating flows with wildcard fields |
US10382324B2 (en) | 2013-09-15 | 2019-08-13 | Nicira, Inc. | Dynamically generating flows with wildcard fields |
US10498638B2 (en) | 2013-09-15 | 2019-12-03 | Nicira, Inc. | Performing a multi-stage lookup to classify packets |
US9575782B2 (en) | 2013-10-13 | 2017-02-21 | Nicira, Inc. | ARP for logical router |
US11029982B2 (en) | 2013-10-13 | 2021-06-08 | Nicira, Inc. | Configuration of logical router |
US10528373B2 (en) | 2013-10-13 | 2020-01-07 | Nicira, Inc. | Configuration of logical router |
US9977685B2 (en) | 2013-10-13 | 2018-05-22 | Nicira, Inc. | Configuration of logical router |
US9910686B2 (en) | 2013-10-13 | 2018-03-06 | Nicira, Inc. | Bridging between network segments with a logical router |
US10693763B2 (en) | 2013-10-13 | 2020-06-23 | Nicira, Inc. | Asymmetric connection with external networks |
US10063458B2 (en) | 2013-10-13 | 2018-08-28 | Nicira, Inc. | Asymmetric connection with external networks |
US9785455B2 (en) | 2013-10-13 | 2017-10-10 | Nicira, Inc. | Logical router |
US9838276B2 (en) | 2013-12-09 | 2017-12-05 | Nicira, Inc. | Detecting an elephant flow based on the size of a packet |
US11539630B2 (en) | 2013-12-09 | 2022-12-27 | Nicira, Inc. | Inspecting operations of a machine to detect elephant flows |
US10193771B2 (en) | 2013-12-09 | 2019-01-29 | Nicira, Inc. | Detecting and handling elephant flows |
US10158538B2 (en) | 2013-12-09 | 2018-12-18 | Nicira, Inc. | Reporting elephant flows to a network controller |
US9548924B2 (en) | 2013-12-09 | 2017-01-17 | Nicira, Inc. | Detecting an elephant flow based on the size of a packet |
US10666530B2 (en) | 2013-12-09 | 2020-05-26 | Nicira, Inc. | Detecting and handling large flows
US11811669B2 (en) | 2013-12-09 | 2023-11-07 | Nicira, Inc. | Inspecting operations of a machine to detect elephant flows |
US9967199B2 (en) | 2013-12-09 | 2018-05-08 | Nicira, Inc. | Inspecting operations of a machine to detect elephant flows |
US11095536B2 (en) | 2013-12-09 | 2021-08-17 | Nicira, Inc. | Detecting and handling large flows |
US10380019B2 (en) | 2013-12-13 | 2019-08-13 | Nicira, Inc. | Dynamically adjusting the number of flows allowed in a flow table cache |
US9996467B2 (en) | 2013-12-13 | 2018-06-12 | Nicira, Inc. | Dynamically adjusting the number of flows allowed in a flow table cache |
US9569368B2 (en) | 2013-12-13 | 2017-02-14 | Nicira, Inc. | Installing and managing flows in a flow table cache |
US9590901B2 (en) | 2014-03-14 | 2017-03-07 | Nicira, Inc. | Route advertisement by managed gateways |
US10110431B2 (en) | 2014-03-14 | 2018-10-23 | Nicira, Inc. | Logical router processing by network controller |
US9225597B2 (en) | 2014-03-14 | 2015-12-29 | Nicira, Inc. | Managed gateways peering with external router to attract ingress packets |
US9419855B2 (en) | 2014-03-14 | 2016-08-16 | Nicira, Inc. | Static routes for logical routers |
US9313129B2 (en) | 2014-03-14 | 2016-04-12 | Nicira, Inc. | Logical router processing by network controller |
US10567283B2 (en) | 2014-03-14 | 2020-02-18 | Nicira, Inc. | Route advertisement by managed gateways |
US10164881B2 (en) | 2014-03-14 | 2018-12-25 | Nicira, Inc. | Route advertisement by managed gateways |
US11025543B2 (en) | 2014-03-14 | 2021-06-01 | Nicira, Inc. | Route advertisement by managed gateways |
US9503321B2 (en) | 2014-03-21 | 2016-11-22 | Nicira, Inc. | Dynamic routing for logical routers |
US11252024B2 (en) | 2014-03-21 | 2022-02-15 | Nicira, Inc. | Multiple levels of logical routers |
US9647883B2 (en) | 2014-03-21 | 2017-05-09 | Nicira, Inc. | Multiple levels of logical routers
US10411955B2 (en) | 2014-03-21 | 2019-09-10 | Nicira, Inc. | Multiple levels of logical routers |
US9893988B2 (en) | 2014-03-27 | 2018-02-13 | Nicira, Inc. | Address resolution using multiple designated instances of a logical router |
US9413644B2 (en) | 2014-03-27 | 2016-08-09 | Nicira, Inc. | Ingress ECMP in virtual distributed routing environment |
US11190443B2 (en) | 2014-03-27 | 2021-11-30 | Nicira, Inc. | Address resolution using multiple designated instances of a logical router |
US11736394B2 (en) | 2014-03-27 | 2023-08-22 | Nicira, Inc. | Address resolution using multiple designated instances of a logical router |
US9385954B2 (en) | 2014-03-31 | 2016-07-05 | Nicira, Inc. | Hashing techniques for use in a network environment |
US11431639B2 (en) | 2014-03-31 | 2022-08-30 | Nicira, Inc. | Caching of service decisions |
US10659373B2 (en) | 2014-03-31 | 2020-05-19 | Nicira, Inc. | Processing packets according to hierarchy of flow entry storages
US10193806B2 (en) | 2014-03-31 | 2019-01-29 | Nicira, Inc. | Performing a finishing operation to improve the quality of a resulting hash |
US9742881B2 (en) | 2014-06-30 | 2017-08-22 | Nicira, Inc. | Network virtualization using just-in-time distributed capability for classification encoding |
US10511458B2 (en) | 2014-09-30 | 2019-12-17 | Nicira, Inc. | Virtual distributed bridging |
US11178051B2 (en) | 2014-09-30 | 2021-11-16 | Vmware, Inc. | Packet key parser for flow-based forwarding elements |
US11483175B2 (en) | 2014-09-30 | 2022-10-25 | Nicira, Inc. | Virtual distributed bridging |
US9768980B2 (en) | 2014-09-30 | 2017-09-19 | Nicira, Inc. | Virtual distributed bridging |
US10020960B2 (en) | 2014-09-30 | 2018-07-10 | Nicira, Inc. | Virtual distributed bridging |
US10250443B2 (en) | 2014-09-30 | 2019-04-02 | Nicira, Inc. | Using physical location to modify behavior of a distributed virtual network element |
US11252037B2 (en) | 2014-09-30 | 2022-02-15 | Nicira, Inc. | Using physical location to modify behavior of a distributed virtual network element |
US10469342B2 (en) | 2014-10-10 | 2019-11-05 | Nicira, Inc. | Logical network traffic analysis |
US11128550B2 (en) | 2014-10-10 | 2021-09-21 | Nicira, Inc. | Logical network traffic analysis |
US11799800B2 (en) | 2015-01-30 | 2023-10-24 | Nicira, Inc. | Logical router with multiple routing components |
US11283731B2 (en) | 2015-01-30 | 2022-03-22 | Nicira, Inc. | Logical router with multiple routing components |
US10700996B2 (en) | 2015-01-30 | 2020-06-30 | Nicira, Inc. | Logical router with multiple routing components |
US10079779B2 (en) | 2015-01-30 | 2018-09-18 | Nicira, Inc. | Implementing logical router uplinks |
US10129180B2 (en) | 2015-01-30 | 2018-11-13 | Nicira, Inc. | Transit logical switch within logical router |
US10652143B2 (en) | 2015-04-04 | 2020-05-12 | Nicira, Inc. | Route server mode for dynamic routing between logical and physical networks |
US10038628B2 (en) | 2015-04-04 | 2018-07-31 | Nicira, Inc. | Route server mode for dynamic routing between logical and physical networks |
US11601362B2 (en) | 2015-04-04 | 2023-03-07 | Nicira, Inc. | Route server mode for dynamic routing between logical and physical networks |
US10635604B2 (en) * | 2015-04-17 | 2020-04-28 | EMC IP Holding Company LLC | Extending a cache of a storage system |
US20160335199A1 (en) * | 2015-04-17 | 2016-11-17 | Emc Corporation | Extending a cache of a storage system |
US11799775B2 (en) | 2015-06-30 | 2023-10-24 | Nicira, Inc. | Intermediate logical interfaces in a virtual distributed router environment |
US10225184B2 (en) | 2015-06-30 | 2019-03-05 | Nicira, Inc. | Redirecting traffic in a virtual distributed router environment |
US10348625B2 (en) | 2015-06-30 | 2019-07-09 | Nicira, Inc. | Sharing common L2 segment in a virtual distributed router environment |
US11050666B2 (en) | 2015-06-30 | 2021-06-29 | Nicira, Inc. | Intermediate logical interfaces in a virtual distributed router environment |
US10693783B2 (en) | 2015-06-30 | 2020-06-23 | Nicira, Inc. | Intermediate logical interfaces in a virtual distributed router environment |
US10361952B2 (en) | 2015-06-30 | 2019-07-23 | Nicira, Inc. | Intermediate logical interfaces in a virtual distributed router environment |
US10129142B2 (en) | 2015-08-11 | 2018-11-13 | Nicira, Inc. | Route configuration for logical router |
US10805212B2 (en) | 2015-08-11 | 2020-10-13 | Nicira, Inc. | Static route configuration for logical router |
US10230629B2 (en) | 2015-08-11 | 2019-03-12 | Nicira, Inc. | Static route configuration for logical router |
US11533256B2 (en) | 2015-08-11 | 2022-12-20 | Nicira, Inc. | Static route configuration for logical router |
US10601700B2 (en) | 2015-08-31 | 2020-03-24 | Nicira, Inc. | Authorization for advertised routes among logical routers |
US10075363B2 (en) | 2015-08-31 | 2018-09-11 | Nicira, Inc. | Authorization for advertised routes among logical routers |
US11425021B2 (en) | 2015-08-31 | 2022-08-23 | Nicira, Inc. | Authorization for advertised routes among logical routers |
US10057157B2 (en) | 2015-08-31 | 2018-08-21 | Nicira, Inc. | Automatically advertising NAT routes between logical routers |
US10795716B2 (en) | 2015-10-31 | 2020-10-06 | Nicira, Inc. | Static route types for logical routers |
US11593145B2 (en) | 2015-10-31 | 2023-02-28 | Nicira, Inc. | Static route types for logical routers |
US10095535B2 (en) | 2015-10-31 | 2018-10-09 | Nicira, Inc. | Static route types for logical routers |
US11502958B2 (en) | 2016-04-28 | 2022-11-15 | Nicira, Inc. | Automatic configuration of logical routers on edge nodes |
US10805220B2 (en) | 2016-04-28 | 2020-10-13 | Nicira, Inc. | Automatic configuration of logical routers on edge nodes |
US10333849B2 (en) | 2016-04-28 | 2019-06-25 | Nicira, Inc. | Automatic configuration of logical routers on edge nodes |
US11855959B2 (en) | 2016-04-29 | 2023-12-26 | Nicira, Inc. | Implementing logical DHCP servers in logical networks |
US10484515B2 (en) | 2016-04-29 | 2019-11-19 | Nicira, Inc. | Implementing logical metadata proxy servers in logical networks |
US10841273B2 (en) | 2016-04-29 | 2020-11-17 | Nicira, Inc. | Implementing logical DHCP servers in logical networks |
US10091161B2 (en) | 2016-04-30 | 2018-10-02 | Nicira, Inc. | Assignment of router ID for logical routers |
US10560320B2 (en) | 2016-06-29 | 2020-02-11 | Nicira, Inc. | Ranking of gateways in cluster |
US10749801B2 (en) | 2016-06-29 | 2020-08-18 | Nicira, Inc. | Installation of routing tables for logical router in route server mode |
US10153973B2 (en) | 2016-06-29 | 2018-12-11 | Nicira, Inc. | Installation of routing tables for logical router in route server mode |
US11418445B2 (en) | 2016-06-29 | 2022-08-16 | Nicira, Inc. | Installation of routing tables for logical router in route server mode |
US10454758B2 (en) | 2016-08-31 | 2019-10-22 | Nicira, Inc. | Edge node cluster network redundancy and fast convergence using an underlay anycast VTEP IP |
US11539574B2 (en) | 2016-08-31 | 2022-12-27 | Nicira, Inc. | Edge node cluster network redundancy and fast convergence using an underlay anycast VTEP IP |
US10911360B2 (en) | 2016-09-30 | 2021-02-02 | Nicira, Inc. | Anycast edge service gateways |
US10341236B2 (en) | 2016-09-30 | 2019-07-02 | Nicira, Inc. | Anycast edge service gateways |
US10237123B2 (en) | 2016-12-21 | 2019-03-19 | Nicira, Inc. | Dynamic recovery from a split-brain failure in edge nodes |
US10645204B2 (en) | 2016-12-21 | 2020-05-05 | Nicira, Inc. | Dynamic recovery from a split-brain failure in edge nodes |
US11665242B2 (en) | 2016-12-21 | 2023-05-30 | Nicira, Inc. | Bypassing a load balancer in a return path of network traffic |
US10742746B2 (en) | 2016-12-21 | 2020-08-11 | Nicira, Inc. | Bypassing a load balancer in a return path of network traffic |
US10212071B2 (en) | 2016-12-21 | 2019-02-19 | Nicira, Inc. | Bypassing a load balancer in a return path of network traffic |
US10616045B2 (en) | 2016-12-22 | 2020-04-07 | Nicira, Inc. | Migration of centralized routing components of logical router |
US11115262B2 (en) | 2016-12-22 | 2021-09-07 | Nicira, Inc. | Migration of centralized routing components of logical router |
US10805239B2 (en) | 2017-03-07 | 2020-10-13 | Nicira, Inc. | Visualization of path between logical network endpoints |
US11336590B2 (en) | 2017-03-07 | 2022-05-17 | Nicira, Inc. | Visualization of path between logical network endpoints |
US10200306B2 (en) | 2017-03-07 | 2019-02-05 | Nicira, Inc. | Visualization of packet tracing operation results |
US10637800B2 (en) | 2017-06-30 | 2020-04-28 | Nicira, Inc. | Replacement of logical network addresses with physical network addresses |
US10681000B2 (en) | 2017-06-30 | 2020-06-09 | Nicira, Inc. | Assignment of unique physical network addresses for logical network addresses |
US11595345B2 (en) | 2017-06-30 | 2023-02-28 | Nicira, Inc. | Assignment of unique physical network addresses for logical network addresses |
US10608887B2 (en) | 2017-10-06 | 2020-03-31 | Nicira, Inc. | Using packet tracing tool to automatically execute packet capture operations |
US11336486B2 (en) | 2017-11-14 | 2022-05-17 | Nicira, Inc. | Selection of managed forwarding element for bridge spanning multiple datacenters |
US10374827B2 (en) | 2017-11-14 | 2019-08-06 | Nicira, Inc. | Identifier that maps to different networks at different datacenters |
US10511459B2 (en) | 2017-11-14 | 2019-12-17 | Nicira, Inc. | Selection of managed forwarding element for bridge spanning multiple datacenters |
US10931560B2 (en) | 2018-11-23 | 2021-02-23 | Vmware, Inc. | Using route type to determine routing protocol behavior |
US10797998B2 (en) | 2018-12-05 | 2020-10-06 | Vmware, Inc. | Route server for distributed routers using hierarchical routing protocol |
US10938788B2 (en) | 2018-12-12 | 2021-03-02 | Vmware, Inc. | Static routes for policy-based VPN |
CN109361714A (en) * | 2018-12-18 | 2019-02-19 | 中国移动通信集团江苏有限公司 | User login authentication method, apparatus, device, and computer storage medium |
US11159343B2 (en) | 2019-08-30 | 2021-10-26 | Vmware, Inc. | Configuring traffic optimization using distributed edge services |
US11095480B2 (en) | 2019-08-30 | 2021-08-17 | Vmware, Inc. | Traffic optimization using distributed edge services |
US11924080B2 (en) | 2020-01-17 | 2024-03-05 | VMware LLC | Practical overlay network latency measurement in datacenter |
US11616755B2 (en) | 2020-07-16 | 2023-03-28 | Vmware, Inc. | Facilitating distributed SNAT service |
US11606294B2 (en) | 2020-07-16 | 2023-03-14 | Vmware, Inc. | Host computer configured to facilitate distributed SNAT service |
US11611613B2 (en) | 2020-07-24 | 2023-03-21 | Vmware, Inc. | Policy-based forwarding to a load balancer of a load balancing cluster |
US11451413B2 (en) | 2020-07-28 | 2022-09-20 | Vmware, Inc. | Method for advertising availability of distributed gateway service and machines at host computer |
US11902050B2 (en) | 2020-07-28 | 2024-02-13 | VMware LLC | Method for providing distributed gateway service at host computer |
US11558426B2 (en) | 2020-07-29 | 2023-01-17 | Vmware, Inc. | Connection tracking for container cluster |
US11196628B1 (en) | 2020-07-29 | 2021-12-07 | Vmware, Inc. | Monitoring container clusters |
US11570090B2 (en) | 2020-07-29 | 2023-01-31 | Vmware, Inc. | Flow tracing operation in container cluster |
US11733920B2 (en) | 2020-09-10 | 2023-08-22 | Western Digital Technologies, Inc. | NVMe simple copy command support using dummy virtual function |
US11736436B2 (en) | 2020-12-31 | 2023-08-22 | Vmware, Inc. | Identifying routes with indirect addressing in a datacenter |
US11848825B2 (en) | 2021-01-08 | 2023-12-19 | Vmware, Inc. | Network visualization of correlations between logical elements and associated physical elements |
US11336533B1 (en) | 2021-01-08 | 2022-05-17 | Vmware, Inc. | Network visualization of correlations between logical elements and associated physical elements |
US11556268B2 (en) | 2021-04-22 | 2023-01-17 | Western Digital Technologies, Inc. | Cache based flow for a simple copy command |
WO2022225590A1 (en) * | 2021-04-22 | 2022-10-27 | Western Digital Technologies, Inc. | Cache based flow for a simple copy command |
US11687210B2 (en) | 2021-07-05 | 2023-06-27 | Vmware, Inc. | Criteria-based expansion of group nodes in a network topology visualization |
US11711278B2 (en) | 2021-07-24 | 2023-07-25 | Vmware, Inc. | Visualization of flow trace operation across multiple sites |
US11855862B2 (en) | 2021-09-17 | 2023-12-26 | Vmware, Inc. | Tagging packets for monitoring and analysis |
US11677645B2 (en) | 2021-09-17 | 2023-06-13 | Vmware, Inc. | Traffic monitoring |
US11706109B2 (en) | 2021-09-17 | 2023-07-18 | Vmware, Inc. | Performance of traffic monitoring actions |
Also Published As
Publication number | Publication date |
---|---|
EP1489524B1 (en) | 2012-11-14 |
JP4278445B2 (en) | 2009-06-17 |
US7124169B2 (en) | 2006-10-17 |
US20050008016A1 (en) | 2005-01-13 |
EP1489524A1 (en) | 2004-12-22 |
JP2005010969A (en) | 2005-01-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7124169B2 (en) | Network system and its switches | |
US9811463B2 (en) | Apparatus including an I/O interface and a network interface and related method of use | |
US8949557B2 (en) | File management method and hierarchy management file system | |
JP4859471B2 (en) | Storage system and storage controller | |
US20190278719A1 (en) | Primary Data Storage System with Data Tiering | |
EP1595363B1 (en) | Scsi-to-ip cache storage device and method | |
US7953926B2 (en) | SCSI-to-IP cache storage device and method | |
JP4297747B2 (en) | Storage device | |
JP5205132B2 (en) | Method and apparatus for NAS / CAS unified storage system | |
JP5059974B2 (en) | Cluster shared volume | |
US11640356B2 (en) | Methods for managing storage operations for multiple hosts coupled to dual-port solid-state disks and devices thereof | |
JP2007510978A (en) | Storage server bottom-up cache structure | |
JP2007087059A (en) | Storage control system | |
US11327653B2 (en) | Drive box, storage system and data transfer method | |
US6549988B1 (en) | Data storage system comprising a network of PCs and method using same | |
JP2004334481A (en) | Virtualized information management apparatus | |
JP2003044421A (en) | Virtual storage system and switching node used for the same system | |
JP2007072521A (en) | Storage control system and storage controller | |
US20050223166A1 (en) | Storage control system, channel control device for storage control system, and data transfer device | |
JP5168630B2 (en) | Cache server control circuit and cache server control method for blade server system | |
JP4514222B2 (en) | Data storage system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |