US6973586B2 - System and method for automatic dynamic address switching - Google Patents

System and method for automatic dynamic address switching

Info

Publication number
US6973586B2
US6973586B2
Authority
US
United States
Prior art keywords
pav
pprc
alias
secondary device
base
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime, expires
Application number
US10/134,414
Other versions
US20030204773A1 (en)
Inventor
David B. Petersen
John A. Staubi
Harry M. Yudenfriend
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US10/134,414
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: PETERSEN, DAVID B.; STAUBI, JOHN A.; YUDENFRIEND, HARRY M.
Publication of US20030204773A1
Application granted granted Critical
Publication of US6973586B2
Adjusted expiration legal-status Critical
Expired - Lifetime legal-status Critical Current

Classifications

    • CPC hierarchy: G PHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F11/2069 Management of state, configuration or failover (error correction by redundancy in hardware, where persistent mass storage is redundant by mirroring)
    • G06F3/0632 Configuration or reconfiguration of storage systems by initialisation or re-initialisation
    • G06F3/0638 Organizing or formatting or addressing of data
    • G06F3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F3/0689 Disk arrays, e.g. RAID, JBOD
    • G06F2003/0697 Device management, e.g. handlers, drivers, I/O schedulers

Definitions

  • CA Continuous availability
  • z/OS IBM's operating system for the mainframe environment that operates on zSeries processor
  • PPRC Peer to Peer Remote Copy
  • I/O Input/Output
  • LPL Logical Page List
  • P/DAS PPRC/dynamic address switching
  • An embodiment of the present invention provides a continuous availability solution (in the event of a primary disk subsystem failure and planned maintenance) for transparent disaster recovery for both uni-geographically and multi-geographically located disk subsystems.
  • a method for automatic peer to peer address switching comprising: defining a secondary device as a logical alias of a primary device and performing the following steps concurrently for the primary-secondary device pair upon a determination that address switching is desired: terminating the device pair binding, terminating all logical alias bindings to the primary device except the logical alias binding of the secondary device to the primary device, preventing the primary device from receiving I/O requests, and allowing the secondary device to receive I/O requests.
  • FIG. 1 shows an exemplary system including geographically dispersed logical volumes.
  • FIG. 2 shows exemplary logical volumes during initialization.
  • FIG. 3 shows exemplary logical volumes during initialization.
  • FIG. 4 shows exemplary logical volumes during address switching.
  • FIG. 5 shows exemplary logical volumes during address switching.
  • FIG. 6 is a flow chart of an exemplary initialization process.
  • FIGS. 7 and 8 comprise a flowchart of an exemplary address switching process.
  • FIGS. 9 and 10 comprise a flowchart of additional exemplary steps for the address switching process shown in FIGS. 7 and 8 .
  • Peer-to-Peer Dynamic Address Switching is a z/OS operating system function based on Dynamic Device Reconfiguration (DDR) and Peer-to-Peer Remote Copy (PPRC). It provides a means for installations to non-disruptively switch between devices in a duplex pair when the primary device needs to be made unavailable for reasons such as performing service or migrating from one subsystem to another.
  • P/DAS requires a device to be released (i.e. not actively reserved) prior to P/DAS's execution of operations on the device.
  • P/DAS performs various operations serially (i.e. operates on one device pair at a time) in order to manage the PPRC state of a set of devices.
  • a common solution for completing P/DAS functions is to suspend or terminate the applications requiring I/O access to data stored on the devices, perform operations including breaking the PPRC connections among the devices, and restart the applications, whereby the applications' I/O access requests are redirected to the secondary volumes.
  • These operations generally require approximately fifteen seconds per device, plus one to two additional seconds for each system comprising the cluster.
  • several thousand PPRC pairs, for example, will exceed the maximum amount of time allotted for continuous availability system requirements.
  • P/DAS requires automation routines to provide multi-system serialization via the IOACTION operator command, establishing a synchronization point at which all systems switch devices at the same time in order to ensure data consistency.
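The timing figures above (roughly fifteen seconds per device pair plus one to two seconds per sharing system, applied one pair at a time) can be made concrete with a rough sketch. The function names, the cluster size, and the exact per-system cost below are illustrative assumptions, not values from the patent:

```python
# Hypothetical cost model contrasting serial P/DAS switching with the
# in-parallel switching the embodiment describes. Assumes ~15 s per pair
# plus ~1.5 s per sharing system, per the approximate figures cited above.

def serial_switch_seconds(pairs, systems, per_pair=15.0, per_system=1.5):
    """Wall-clock time when device pairs are switched one at a time."""
    return pairs * (per_pair + systems * per_system)

def parallel_switch_seconds(pairs, systems, per_pair=15.0, per_system=1.5):
    """Wall-clock time if every pair could be switched concurrently."""
    return per_pair + systems * per_system

# 2000 PPRC pairs shared by an 8-system cluster:
print(serial_switch_seconds(2000, 8) / 3600)  # → 15.0 (hours, serial)
print(parallel_switch_seconds(2000, 8))       # → 27.0 (seconds, parallel)
```

Under these assumed costs, a few thousand pairs switched serially amount to hours of outage, while a fully parallel switch stays within tens of seconds, which is the gap the invention targets.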
  • RAID5 disk subsystems may be used to provide for fault tolerance.
  • RAID subsystems are also subject to failure. Such failures may be due to errors in Licensed Internal Code (LIC, or micro-code) which is itself a single point of failure.
  • LIC Licensed Internal Code
  • a typical RAID5 disk subsystem is deployed to a single geographic location and thus is not likely to withstand geographical disasters (e.g. earthquakes, floods, bombs, etc.).
  • An embodiment of the present invention provides for improving CA and D/R attributes by masking logical subsystem outages while eliminating the need for several P/DAS requirements.
  • a feature of an embodiment of the present invention is to provide a higher rate of recovery from system failures by parallelizing the existing P/DAS recovery and by requiring less system serialization resulting from the removal of the requirement of using DDR for its switch-over capability.
  • Another feature of an embodiment of the present invention is to provide transparent recovery for disk subsystem failures.
  • An additional feature of an embodiment of the present invention is to enable a disk subsystem located in multiple geographical locations for in-parallel switching of all primary PPRC devices to all secondary PPRC devices in minimal time without disrupting applications requiring access to the disk subsystem.
  • An additional feature of an embodiment of the present invention is the ability to direct certain read I/O requests to secondary devices.
  • current systems require that I/O operations be sent to a primary device for execution, resulting in a certain amount of latency, especially where the primary device is located remotely from the source of the I/O request.
  • concurrency may be synchronous or asynchronous and that computer applications, programs, tasks, operations, and/or processes may initiate, execute, and terminate independent of one another.
  • An exemplary embodiment of the present invention includes the creation of a single logical volume comprising primary and secondary devices, without requiring a pseudo-online state and without requiring the swapping of UCB contents (subchannel number, CHPIDs, etc.).
  • Single logical volumes allow secondary devices to execute read-only requests when the I/O requestor does not require extent serialization.
  • the single logical volume construct of this exemplary embodiment improves performance by eliminating queue time when there are not enough aliases to execute waiting requests on the primary device alone.
  • an exemplary embodiment of the present invention provides for improved system performance through the service of read requests by secondary devices prior to an event requiring recovery.
  • An exemplary embodiment of the present invention does not require high level automation to respond to messages (IOS002A) and execute a response, and instead includes the flipping of bits on or off to block I/O to primary (base) devices and to route requests to secondary alias devices.
  • the flipping of bits causes the IOS to block I/O to the base, allow I/O to alias, and depend on the terminate pair PSF CCW command to be broadcast to all sharing systems to the secondary device.
  • the primary device presents a device state transition (DST) interrupt to all sharing systems, before accepting any other commands. This DST interrupt notifies the systems that they need to check the state of the devices and perform address switching if secondaries have become active.
  • DST device state transition
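The "flipping of bits" described above can be sketched as a small routing decision: once one flag blocks the base and another admits the secondary, the I/O supervisor needs no message-driven automation to redirect requests. The flag names and the dispatch function below are illustrative inventions, not z/OS IOS internals:

```python
# Toy sketch of flag-based I/O routing after an address switch. The two
# booleans stand in for the UCB bits the text describes; real IOS state
# is far richer than this.

def route_io(request, base_blocked, secondary_allowed):
    """Decide where an I/O request is started given the current flag bits."""
    if not base_blocked:
        return "start on primary (base) device"
    if secondary_allowed:
        return "start on secondary (alias) device"
    return "queue request until switching completes"

# Before the switch: base open, secondary not yet eligible.
print(route_io("read", base_blocked=False, secondary_allowed=False))
# After the bits are flipped: base blocked, secondary unblocked.
print(route_io("read", base_blocked=True, secondary_allowed=True))
```

The point of the sketch is that the switch is a pair of state changes consulted on every request, rather than an operator-visible reconfiguration.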
  • An exemplary embodiment comprises an “IBM S/390” type computer system including at least one logical subsystem wherein the logical subsystem's devices are located in one or more geographical locations.
  • an exemplary computer system is enabled for using a Parallel Access Volume-Alias (PAV-Alias).
  • PAV-Alias Parallel Access Volume-Alias
  • While FIGS. 6 through 10 show exemplary steps for a single computer system comprising a single logical volume, it is understood that the embodiment envisioned by FIGS. 1 through 10 comprises multiple computer systems and multiple logical volumes. It is further understood that the combination of a PAV-Base device and at least one PAV-Alias device may be referred to as a logical volume. An alternate embodiment may be implemented concurrently and in-parallel across multiple computer systems and multiple logical volumes.
  • FIG. 1 shows an exemplary system including geographically dispersed logical volumes.
  • Site A at 10 includes computers/processors 16 and storage devices 18 , 20 connected to data networks 40 , 42 via data communications lines 22 .
  • Site B at 12 includes computers/processors 26 and storage devices 28 , 30 connected to data networks 40 , 42 via data communications lines 32 .
  • the storage devices of site A at 18, 20 and site B at 28, 30 are also in communication via PPRC links 24, 34. It is understood that PPRC links 24 and 34 are exemplary, and their absence would not depart from the scope of the present disclosure.
  • IOS Input/Output Supervisor
  • computer program code for automatic dynamic address switching is located in a computer usable storage medium 14 that is in communication with and accessible by at least one computer/processor 16 , 26 via a propagated data communication signal.
  • FIG. 2 shows exemplary logical volumes during initialization.
  • PAV-Base 105 UCB and PAV-Alias 101 , 102 UCBs are bound together during system initialization to form a single logical volume for scheduling I/O requests. I/O can be started over any free PAV-Base or PAV-Alias to a PAV device.
  • a PAV-Base 105 UCB contains the device number that is surfaced to the systems operator and applications as the device number representing the logical volume.
  • a device number is a four-digit hexadecimal number that is used to uniquely identify a device to the operating system and machine.
  • PAV-Alias 101 , 102 UCBs represent alias unit addresses that are defined in the DASD subsystem to allow access to the same logical volume.
  • the PAV-Base points 113 to the first PAV-Alias on the PAV-Alias queue 111 .
  • PAV-Aliases are formed into a circular queue 111 in order to easily be able to find all PAV-Aliases for a single logical volume.
  • the PPRC secondary device 106 is also defined (to the operating system) as a PAV-Base, however, the PPRC secondary device is not yet brought online or made eligible for I/O operations (i.e. normal reads/writes are rejected with unit check status and sense data indicating the device is a PPRC secondary).
  • a PPRC secondary device bound as a PAV-Alias may hereinafter be referred to as a PAV-Secondary.
  • the PAV-Secondary 106 also has a set of PAV-Alias devices 103, 104 that will be used when the secondary device is set to a usable state, known as the simplex state, after an error-causing system event occurs; however, they are not initially bound to either the PAV-Base 105 or the PAV-Secondary 106.
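The UCB relationships of FIG. 2 can be modeled with a toy data structure: a base pointing at the head of a circular alias queue, plus a secondary that is defined but not yet bound or online. The class, fields, and helper functions are invented for illustration and do not reflect the actual z/OS UCB layout:

```python
# Illustrative model of PAV-Base / PAV-Alias binding during initialization.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Ucb:
    device_number: str                    # four-digit hex device number
    kind: str                             # "base", "alias", or "secondary"
    next_alias: Optional["Ucb"] = None    # link in the circular alias queue
    blocked: bool = False                 # True -> only serialization-free reads
    online: bool = True

def bind_aliases(base: Ucb, aliases: List[Ucb]) -> None:
    """Form the aliases into a circular queue and point the base at its head."""
    for a, nxt in zip(aliases, aliases[1:] + aliases[:1]):
        a.next_alias = nxt
    base.next_alias = aliases[0] if aliases else None

def walk_aliases(base: Ucb) -> List[str]:
    """Find every alias of the logical volume by walking the circle once."""
    seen, cur = [], base.next_alias
    while cur is not None and cur.device_number not in seen:
        seen.append(cur.device_number)
        cur = cur.next_alias
    return seen

base = Ucb("0105", "base")
bind_aliases(base, [Ucb("0101", "alias"), Ucb("0102", "alias")])
secondary = Ucb("0106", "secondary", online=False)  # defined, not yet bound

print(walk_aliases(base))  # → ['0101', '0102']
```

The circular queue is what makes "find all aliases of this volume" a single traversal, which the later binding and unbinding steps rely on.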
  • FIG. 3 also shows exemplary logical volumes during initialization.
  • each PPRC secondary device 206 that is defined as a PAV-Base device (PAV secondary) is bound 212 to its corresponding PPRC primary device 205 as if the PPRC secondary device 206 were a PAV-alias device and the corresponding PPRC primary device 205 were defined as a PAV-base device.
  • This binding process is accomplished by extending the PAV-Alias circular queue 211 to include the PAV-Alias device 206 .
  • the PAV-Secondary device 206 UCB is then updated to point 212 to the PAV-Base PPRC primary device 205 .
  • the UCB Look Up Table entry for the PPRC secondary 206 is updated to indicate the UCB is now a PAV-Alias so that it won't be found by applications searching for PAV-Base UCBs.
  • a PAV-Base UCB for a secondary device is referred to as a PAV-Secondary UCB.
  • a Perform Subsystem Function (PSF) command is issued to the PPRC secondary device 206 to instruct it that it can accept and execute read requests that do not require extent serialization.
  • Extent serialization is an attribute of the Extended Count-Key Data (ECKD) architecture where two or more channel programs that access the same tracks (extents) on the same disk cannot execute concurrently if either channel program allows for a write.
  • ESS Enterprise Storage Server ("Shark")
  • the PPRC secondary device 206 UCB is then marked as “blocked” so that only I/O requests that are “read only” and do not require extent serialization may be executed.
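The extent-serialization rule described above amounts to an overlap-plus-writer check: two channel programs may run concurrently unless their extents overlap and at least one writes. The following is a toy check under that reading, not actual ECKD channel-program logic:

```python
# Illustrative extent-serialization test. Extents are half-open track
# ranges (start, end); a "program" is (extent, is_write).

def extents_overlap(a, b):
    """True when the two half-open track ranges share any track."""
    return a[0] < b[1] and b[0] < a[1]

def may_run_concurrently(prog_a, prog_b):
    """Concurrent execution is allowed unless extents overlap and one writes."""
    (ext_a, write_a), (ext_b, write_b) = prog_a, prog_b
    if not extents_overlap(ext_a, ext_b):
        return True
    return not (write_a or write_b)

# Two readers over the same tracks may proceed; a reader against a writer
# must be serialized.
print(may_run_concurrently(((0, 10), False), ((5, 15), False)))  # → True
print(may_run_concurrently(((0, 10), False), ((5, 15), True)))   # → False
```

This is why a blocked PAV-Secondary can safely serve only reads that do not require extent serialization: such requests can never conflict with mirrored writes in flight on the primary.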
  • FIG. 4 shows exemplary logical volumes during operation.
  • a recovery process is initiated.
  • the current PAV-Alias devices 301 , 302 UCBs are unbound from the PAV-Base device 305 (PPRC primary).
  • the PPRC secondary device 306 UCB remains as the only PAV-Alias bound to the PAV-Base device 305 (PPRC primary).
  • the circular queue for the PAV-Aliases contains only PAV-Secondary 311 .
  • the terminate PPRC pair PSF command is issued to the PPRC secondary device 306 in order to break the PPRC pair and set it into the simplex state.
  • the PAV-Base device 305 UCB is marked so that no I/O requests can be executed to it.
  • the PAV-Secondary device 306 (former PPRC secondary, now simplex state) is now marked as unblocked so that all I/O requests can be started.
  • the Manage Alias PSF command is issued to the PAV-Alias device 306 in order to force a device state transition from every PAV-Alias associated with it.
  • Each PAV-Alias device presents an interrupt 303 , 304 and is bound to the PAV-Base device 305 , which now corresponds to the PAV-Alias device 306 .
  • FIG. 5 also shows exemplary logical volumes during operation.
  • PAV-Alias devices 403 , 404 have been bound to the PAV-Base device 405 that includes the PAV-Alias 406 (former PPRC secondary). All I/O requests from applications allocated to the PAV-Base device 405 (former PPRC primary) are now directed to the former PPRC secondary device 406 and all of its PAV-Alias devices 403 , 404 .
  • FIG. 6 is a flow chart of an exemplary initialization process in accordance with the present invention.
  • during system initialization, after all the PAV-Alias devices have been processed (bound to their corresponding PAV-Bases) following the establishment of a PPRC device pair at 502, or after a PPRC primary device is varied online at 503, the operating system locates the PPRC secondary devices to initialize.
  • the Input/Output Supervisor (IOS) component of the operating system finds every PPRC secondary device at 504. For each unbound PPRC secondary device, the corresponding PAV-Base UCB for the PPRC primary device is located at 505.
  • IOS Input/Output Supervisor
  • if no corresponding PAV-Base UCB is found, the PPRC secondary device is ignored and processing continues to the next PPRC secondary device at 511. Once all the PPRC secondary devices have been processed, processing terminates.
  • the PAV-Base PPRC secondary UCB is bound to the PAV-Base at 507 as described for FIG. 2 .
  • the PPRC secondary device is marked as a PAV-Secondary device UCB at 508 .
  • the PAV-Secondary UCB is marked at 509 such that all I/O to the PAV-Secondary UCB, except read-only requests that do not require extent serialization, is prevented by altering a flag bit in the I/O request block.
  • the UCB Look Up Table entry for the PPRC secondary device is updated to mark the PAV-Secondary device as a PAV-Alias device at 510 so that the UCB is not findable by applications. If it is determined that there are additional devices at 511 , steps 504 – 510 are repeated for each such determined device. It is understood that the initialization steps described herein are completed prior to the occurrence of a failure requiring a recovery and thereby reduce the number of steps and the amount of time necessary for recovery from the failure.
  • FIGS. 7 and 8 comprise a flowchart of an exemplary recovery process in accordance with the present invention.
  • the first host computer system (i.e., the first system sharing access to the devices being recovered) to detect a permanent I/O error (e.g. deferred condition code 3 for an application I/O request at 600) begins a recovery process at 601.
  • the PAV-Base UCB is quiesced at 602 so that no new I/O requests can be started to the logical volume.
  • a read-subsystem-status PSF command is issued to the PAV-Secondary UCB at 603 to determine the state of the device. If the PAV-Secondary is still in a PPRC pair with the primary at 604 then the terminate pair PSF is issued at 605 .
  • the sharing systems will see a device state transition when the PPRC pair is broken.
  • once the terminate pair PSF is issued at 605, all future I/O is blocked to the PAV-Base at 606 and all I/O is now allowed to the PAV-Secondary at 607.
  • all current PAV-Alias UCBs, except for the PAV-Secondary UCB are unbound at 609 .
  • the manage alias PSF is then issued to the PAV-Secondary device at 610 in order to force a device state transition from all the PAV-Alias UCBs for the PAV-Secondary device. This causes interrupts to be presented for the PAV-Alias devices so that they can become bound to the PAV-Secondary.
  • the device is unquiesced (I/O activity is allowed to resume) at 611 .
  • if the device was not found to be in a PPRC pair at 604 but is found to be in the simplex state at 608, another sharing system must have issued the terminate pair PSF first. In such a case, processing continues beginning with step 606 and as described above. If the device state could not be determined at 608, processing continues beginning with step 611 and as described above.
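The recovery flow of FIGS. 7 and 8 can be compressed into a small state machine over a simple volume record. Step numbers in the comments refer to the flowchart; the dictionary keys and function name are illustrative assumptions, not system interfaces:

```python
# Hypothetical condensation of the recovery flowchart (steps 602-611).

def recover(volume):
    volume["quiesced"] = True                  # 602: no new I/O starts
    if volume["pprc_state"] == "duplex":       # 604: still a PPRC pair?
        volume["pprc_state"] = "simplex"       # 605: terminate pair PSF
    if volume["pprc_state"] == "simplex":
        volume["base_blocked"] = True          # 606: block I/O to PAV-Base
        volume["secondary_blocked"] = False    # 607: allow I/O to PAV-Secondary
        volume["aliases"] = ["secondary"]      # 609: unbind all other aliases
        # 610: manage-alias PSF forces DST interrupts so the aliases rebind
        volume["aliases"] += ["alias-1", "alias-2"]
    volume["quiesced"] = False                 # 611: resume application I/O
    return volume

vol = {"pprc_state": "duplex", "base_blocked": False,
       "secondary_blocked": True, "quiesced": False,
       "aliases": ["alias-1", "alias-2", "secondary"]}
print(recover(vol)["aliases"])  # → ['secondary', 'alias-1', 'alias-2']
```

Note how the second `if` also covers the race the text describes: if another sharing system already terminated the pair, the volume arrives in the simplex state and the flow joins at step 606 unchanged.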
  • FIGS. 9 and 10 comprise a flowchart of additional exemplary steps for the recovery process of FIGS. 7 and 8 .
  • the PPRC pair is terminated by issuing the terminate pair PSF (shown at step 605 of FIG. 7 )
  • all systems with access to the device receive a device state transition interrupt at 700 .
  • the interrupt handler gets control at 701 and quiesces the PAV-Base UCB at 702 so that no application I/O requests can be started for the logical volume. If it is determined that the interrupt occurred on a PAV-Secondary UCB at 703 then the read-subsystem-data PSF is issued at 704 to determine if the PAV-Secondary is still in a PPRC pair.
  • if the PAV-Secondary is not determined to be in a PPRC pair at 705, then all PAV-Alias UCBs, except for the PAV-Secondary, are unbound from the PAV-Base at 706. All future I/O requests are blocked from executing to the PAV-Base at 707 (similar to step 606).
  • the PAV-Secondary UCB is marked to allow future I/O requests to execute to the PAV-Secondary at 708 .
  • the manage alias PSF command is issued at 709 to force a device state transition from all the PAV-Alias UCBs for the PAV-Secondary device. The device is unquiesced to resume application I/O for the logical volume at 710 .
  • when PAV-Secondary UCBs are active (I/O is allowed), PAV-Alias UCBs for the PAV-Secondary are bound to the PAV-Base that also binds the PAV-Secondary. Upon completion of this processing, processing continues beginning with step 710 and as described above.
  • the computer program code segments configure the microprocessor to create specific logic circuits.

Abstract

A method for automatic peer to peer address switching, comprising: defining a secondary device as a logical alias of a primary device and performing the following steps concurrently for the primary-secondary device pair upon a determination that address switching is desired: terminating the device pair binding, terminating all logical alias bindings to the primary device except the logical alias binding of the secondary device to the primary device, preventing the primary device from receiving I/O requests, and allowing the secondary device to receive I/O requests.

Description

This application is related to U.S. Pat. No. 5,870,537, “Concurrent Switch to Shadowed Device for Storage Controller and Device Errors”, assigned to the assignee of the present application, the contents of which are herein incorporated by reference.
CROSS REFERENCE TO RELATED APPLICATIONS
This application is related to United States Patent Application, “System and Method for Concurrent Logical Device Swapping”, Ser. No. 10/134,254, filed Apr. 29, 2002 and assigned to the assignee of the present application, the contents of which are herein incorporated by reference.
BACKGROUND
Continuous availability (CA) is the attribute of a system or cluster of systems to provide high availability (i.e., mask unplanned outages from an end-user perspective) and continuous operations (i.e., mask planned maintenance from an end-user perspective). Attempts to achieve these attributes have been made utilizing hardware by enabling a system for redundancy with such mechanisms as multiple servers, multiple coupling facilities (CFS), multiple sysplex timers, multiple channel paths spread across multiple switches, etc. Attempts to achieve these attributes have been made utilizing software by enabling a system for software redundancy with redundant z/OS (IBM's operating system for the mainframe environment that operates on zSeries processor) images and multiple software subsystems per z/OS, etc.
Existing CA systems generally comprise disk subsystems that are a single point of failure. For example, where there is only one copy of disk resident data and the disk subsystem becomes nonfunctional for any reason, the system and/or the applications executing therein typically experience an outage even when the system's other components are redundant or fault tolerant. Some CA systems, including those comprising synchronous disk mirroring subsystems, such as those supporting Peer to Peer Remote Copy (PPRC) functions, reduce the opportunity for outages by having two copies of the data and the cluster spread across two geographical locations.
There are several types of outages that a CA system may experience. A first type of outage is a disk subsystem failure. If a PPRC enabled system experiences a primary disk subsystem failure (i.e., the primary disk subsystem is inaccessible causing an impact on service), required repairs can be performed on the primary disk subsystem while simultaneously performing a disruptive failover to use the secondary disk subsystem. Restoration of service typically requires less than one hour, which compares favorably to non-PPRC systems that typically require several hours before service can be restored. In addition, non-PPRC systems may experience logical contamination, such as permanent Input/Output (I/O) errors, which would also be present on the secondary PPRC volume and would require a data recovery action prior to the data being accessible. For example, IBM DB2 will create a Logical Page List (LPL) entry for each table space that receives a permanent I/O error for which recovery is required. Referring again to a system enabled with PPRC, once the primary disk subsystem is repaired the original PPRC configuration is restored by performing a disruptive switch or using existing PPRC/dynamic address switching functions.
A second type of outage that may be experienced is a site failure wherein the failed site includes disk subsystems necessary for continued operations. When a PPRC enabled system experiences a site failure because, for example, z/OS images within a site become nonfunctional or the primary PPRC disk subsystem(s) are inaccessible, the operator on the PPRC enabled system can initiate a disruptive failover to the surviving site and restore service within one hour. When the failed site is restored, the original PPRC configuration is restored by performing a disruptive switch or using existing PPRC/dynamic address switching (P/DAS) functions.
A third type of outage that may be experienced is caused by disk subsystem maintenance. When a PPRC enabled system requires disk subsystem maintenance, there are at least two methods for proceeding. The operator may perform a disruptive planned disk switch to use the secondary disk subsystem restoring service typically in less than one hour. The majority of PPRC systems use this technique to minimize the time when their disaster recovery (D/R) readiness is disabled. The system may also use existing PPRC P/DAS functions to transparently switch the secondary disk subsystem into use.
Existing PPRC and z/OS P/DAS mechanisms process each PPRC volume pair switch sequentially as a result of z/OS Input/Output Services Component serialization logic, thus requiring approximately twenty to thirty seconds to switch each PPRC pair. A freeze function is issued to prevent I/O for the duration of the P/DAS processing because primary disks are spread across two sites, resulting in the potential for a lack of disaster recovery readiness lasting for a significant period of time. For example, assuming that a PPRC enterprise wanted to perform maintenance on one disk subsystem that contained 1024 PPRC volumes and P/DAS were used to perform a transparent switch, the elapsed P/DAS processing time would be equal to 5.7–8.5 hours (1024 volumes * 20–30 seconds processing time per volume pair). Additionally, there are requirements, as described in the IBM publication DFSMS/MVS V1 Advanced Copy Services (SC35-0355), that must be met for P/DAS to work, thereby making it very unlikely that a production PPRC disk subsystem pair can be switched using P/DAS without manual intervention. Because many enterprises are unable to tolerate having their D/R readiness disabled for several hours, they often elect to perform a disruptive planned disk switch instead of using the P/DAS function. Once the disk subsystem maintenance is completed, the operator will restore the original PPRC configuration by performing a disruptive switch or use the existing P/DAS function.
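The elapsed-time figure quoted above can be verified with a short calculation (an illustration only, using the 20–30 second per-pair range stated in the text):

```python
# Estimate elapsed time for a serial P/DAS switch of 1024 PPRC volume pairs,
# at the 20-30 seconds per pair quoted above.
volumes = 1024

low_hours = volumes * 20 / 3600   # at 20 seconds per volume pair
high_hours = volumes * 30 / 3600  # at 30 seconds per volume pair

print(f"{low_hours:.1f}-{high_hours:.1f} hours")  # 5.7-8.5 hours
```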
An embodiment of the present invention provides a continuous availability solution (in the event of a primary disk subsystem failure or planned maintenance) for transparent disaster recovery for both uni-geographically and multi-geographically located disk subsystems.
SUMMARY
A method for automatic peer to peer address switching, comprising: defining a secondary device as a logical alias of a primary device and performing the following steps concurrently for the primary-secondary device pair upon a determination that address switching is desired: terminating the device pair binding, terminating all logical alias bindings to the primary device except the logical alias binding of the primary device to the secondary device, preventing the primary device from receiving I/O requests, and allowing the secondary device to receive I/O requests.
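The steps enumerated above can be sketched in Python (an illustrative model only; the Device class and its fields are hypothetical stand-ins for UCB state, not an actual z/OS interface):

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    """Hypothetical stand-in for a device's UCB state."""
    name: str
    io_allowed: bool = True
    aliases: list = field(default_factory=list)  # logical alias bindings

def switch(primary: Device, secondary: Device) -> None:
    """Address switch for one primary-secondary pair (sketch).

    Terminates all logical alias bindings to the primary except the
    secondary's own, blocks I/O to the primary, and allows I/O to the
    secondary. In the method described, this runs concurrently per pair."""
    primary.aliases = [a for a in primary.aliases if a is secondary]
    primary.io_allowed = False
    secondary.io_allowed = True

primary = Device("base")
secondary = Device("secondary", io_allowed=False)
primary.aliases = [Device("alias1"), Device("alias2"), secondary]
switch(primary, secondary)
print(primary.io_allowed, secondary.io_allowed, len(primary.aliases))  # False True 1
```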
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows an exemplary system including geographically dispersed logical volumes.
FIG. 2 shows exemplary logical volumes during initialization.
FIG. 3 shows exemplary logical volumes during initialization.
FIG. 4 shows exemplary logical volumes during address switching.
FIG. 5 shows exemplary logical volumes during address switching.
FIG. 6 is a flow chart of an exemplary initialization process.
FIGS. 7 and 8 comprise a flowchart of an exemplary address switching process.
FIGS. 9 and 10 comprise a flowchart of additional exemplary steps for the address switching process shown in FIGS. 7 and 8.
DETAILED EMBODIMENT
Peer-to-Peer Dynamic Address Switching (P/DAS) is a z/OS operating system function based on Dynamic Device Reconfiguration (DDR) and Peer-to-Peer Remote Copy (PPRC). It provides a means for installations to non-disruptively switch between devices in a duplex pair when the primary device needs to be made unavailable for reasons such as performing service or migrating from one subsystem to another. P/DAS requires a device to be released (i.e., not actively reserved) prior to P/DAS's execution of operations on the device. P/DAS performs various operations serially (i.e., operates on one device pair at a time) in order to manage the PPRC state of a set of devices. When a geographically dispersed logical subsystem is utilized by an enterprise, a common solution for completing P/DAS functions is to suspend or terminate the applications requiring I/O access to data stored on the devices, perform operations including breaking the PPRC connections among the devices, and restart the applications, whereby the applications' I/O access requests are redirected to the secondary volumes. These operations generally require approximately fifteen seconds per device, plus one to two additional seconds for each system comprising the cluster. As a result, several thousand PPRC pairs, for example, will exceed the maximum amount of allotted time for continuous availability system requirements. In addition, P/DAS requires automation routines to provide multi-system serialization via the IOACTION operator command in order to provide a synchronization point for all systems to switch devices at the same time to ensure data consistency.
In addition to PPRC, RAID5 disk subsystems may be used to provide fault tolerance. However, RAID subsystems are also subject to failure. Such failures may be due to errors in Licensed Internal Code (LIC, or micro-code), which is itself a single point of failure. Additionally, a typical RAID5 disk subsystem is deployed to a single geographic location and thus is not likely to withstand geographical disasters (e.g., earthquakes, floods, bombs, etc.).
An embodiment of the present invention provides for improving CA and D/R attributes by masking logical subsystem outages while eliminating the need for several P/DAS requirements. A feature of an embodiment of the present invention is to provide a higher rate of recovery from system failures by parallelizing the existing P/DAS recovery and by requiring less system serialization resulting from the removal of the requirement of using DDR for its switch-over capability. Another feature of an embodiment of the present invention is to provide transparent recovery for disk subsystem failures. An additional feature of an embodiment of the present invention is to enable a disk subsystem located in multiple geographical locations for in-parallel switching of all primary PPRC devices to all secondary PPRC devices in a minimal amount of time without disrupting applications requiring access to the disk subsystem. An additional feature of an embodiment of the present invention is the ability to direct certain read I/O requests to secondary devices. In contrast, current systems require that I/O operations be sent to a primary device for execution, thereby resulting in a certain amount of latency, especially where the primary device is remotely located from the source of the I/O request. It is understood that these features are merely exemplary and not intended to limit embodiments of the invention.
For purposes of clarity and explanation, “in-parallel” (hereinafter “concurrent”) computer applications, programs, tasks, operations, and/or processes refers to the concurrent execution of two or more of the same. Furthermore, it is understood to one of ordinary skill in the art that concurrency may be synchronous or asynchronous and that computer applications, programs, tasks, operations, and/or processes may initiate, execute, and terminate independent of one another.
An exemplary embodiment of the present invention includes the creation of a single logical volume comprising primary and secondary devices without requiring a pseudo-online state and without requiring the swapping of UCB contents (subchannel number, CHPIDs, etc.). Single logical volumes allow secondary devices to execute read-only requests when the I/O requestor does not require extent serialization. Thus, the single logical volume construct of this exemplary embodiment improves performance by eliminating queue time when there are not enough aliases to execute waiting requests on the primary device alone.
In addition, an exemplary embodiment of the present invention provides for improved system performance through the service of read requests by secondary devices prior to an event requiring recovery.
An exemplary embodiment of the present invention does not require high level automation to respond to messages (IOS002A) and execute a response; instead, bits are flipped on or off to block I/O to primary (base) devices and to route requests to secondary alias devices. For example, when the first device experiences an error (i.e., a non-operational condition), the flipped bits cause the IOS to block I/O to the base, allow I/O to the alias, and depend on the terminate pair PSF CCW command issued to the secondary device being broadcast to all sharing systems. When the terminate pair CCW is executed, the primary device presents a device state transition (DST) interrupt to all sharing systems before accepting any other commands. This DST interrupt notifies the systems that they need to check the state of the devices and perform address switching if secondaries have become active.
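The bit-flipping idea above can be illustrated with a minimal sketch (the flag values and function names are hypothetical, chosen only to show per-device flags consulted before starting an I/O, rather than automation replying to an IOS002A message):

```python
# Hypothetical flag bits for illustration; not actual UCB flag values.
BLOCKED = 0x01   # set: IOS must not start I/O to this device
ALIAS_OK = 0x02  # set: requests may be routed to this secondary alias

def on_permanent_error(base_flags: int, alias_flags: int) -> tuple[int, int]:
    """Flip bits to block I/O to the base and allow it via the alias."""
    return base_flags | BLOCKED, alias_flags | ALIAS_OK

def io_permitted(flags: int) -> bool:
    """IOS admission check: I/O may start only if the device is not blocked."""
    return not (flags & BLOCKED)

base, alias = on_permanent_error(0, 0)
print(io_permitted(base), bool(alias & ALIAS_OK))  # False True
```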
An exemplary embodiment comprises an “IBM S/390” type computer system including at least one logical subsystem wherein the logical subsystem's devices are located in one or more geographical locations. Moreover, an exemplary computer system is enabled for using a Parallel Access Volume-Alias (PAV-Alias). It is understood that a PAV-Alias as described in the present application is an example of a logical device alias and that any system enabled for using any logical device alias construct or the equivalent thereof may be used without exceeding the scope of the present application. While for purposes of explanation and example, FIGS. 6 through 10 show exemplary steps for a single computer system comprising a single logical volume, it is understood that the embodiment envisioned by FIGS. 1 through 10 comprises multiple computer systems and multiple logical volumes. It is further understood that the combination of a PAV-Base device and at least one PAV-Alias device may be referred to as a logical volume. An alternate embodiment may be implemented concurrently and in-parallel across multiple computer systems and multiple logical volumes.
FIG. 1 shows an exemplary system including geographically dispersed logical volumes. Site A at 10 includes computers/processors 16 and storage devices 18, 20 connected to data networks 40, 42 via data communications lines 22. Site B at 12 includes computers/processors 26 and storage devices 28, 30 connected to data networks 40, 42 via data communications lines 32. The storage devices of site A at 18, 20 and site B at 28, 30 are also in communication via PPRC links 24, 34. It is understood that PPRC links 24 and 34 are exemplary and the lack thereof does not exceed the scope of the present disclosure. Under exemplary circumstances wherein primary storage devices 28, 30 are located at site B at 12, a failure 36 is detected from Input/Output Supervisor (IOS) symptoms such as those causing IOS002A message 38 indicating that storage devices 28, 30 do not have operational paths with which to communicate with the remainder of the geographically dispersed storage subsystem. In an exemplary embodiment, computer program code for automatic dynamic address switching is located in a computer usable storage medium 14 that is in communication with and accessible by at least one computer/processor 16, 26 via a propagated data communication signal.
FIG. 2 shows exemplary logical volumes during initialization. PAV-Base 105 UCB and PAV-Alias 101, 102 UCBs are bound together during system initialization to form a single logical volume for scheduling I/O requests. I/O can be started over any free PAV-Base or PAV-Alias to a PAV device. A PAV-Base 105 UCB contains the device number that is surfaced to the systems operator and applications as the device number representing the logical volume. A device number is a four digit hexadecimal number that is used to uniquely identify a device to the operating system and machine. PAV-Alias 101, 102 UCBs represent alias unit addresses that are defined in the DASD subsystem to allow access to the same logical volume. The PAV-Base points 113 to the first PAV-Alias on the PAV-Alias queue 111. PAV-Aliases are formed into a circular queue 111 in order to easily find all PAV-Aliases for a single logical volume. Each PAV-Alias 101, 102 points 112 back to the PAV-Base 105 device with which it is associated in order to quickly process interrupts and errors when I/O operations complete.
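The circular PAV-Alias queue and the base/alias pointers described above can be modeled as follows (a sketch; the field names are illustrative stand-ins, not the actual UCB layout):

```python
class UCB:
    """Minimal stand-in for a unit control block (illustrative fields only)."""
    def __init__(self, device_number: str):
        self.device_number = device_number  # four-digit hex device number
        self.next_alias = None   # next PAV-Alias in the circular queue
        self.base = None         # back-pointer from a PAV-Alias to its PAV-Base
        self.first_alias = None  # PAV-Base's pointer to the first PAV-Alias

def bind_aliases(base: UCB, aliases: list) -> None:
    """Form the PAV-Aliases into a circular queue and point each at the base."""
    for i, alias in enumerate(aliases):
        alias.base = base                                  # back-pointer 112
        alias.next_alias = aliases[(i + 1) % len(aliases)] # circular link 111
    base.first_alias = aliases[0]                          # pointer 113

base = UCB("1000")
a1, a2 = UCB("10F1"), UCB("10F2")
bind_aliases(base, [a1, a2])
print(a2.next_alias is a1, a1.base is base)  # True True
```

Walking `next_alias` from `first_alias` until the starting UCB reappears visits every PAV-Alias of the logical volume, which is the "easily find all PAV-Aliases" property the circular queue provides.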
The PPRC secondary device 106 is also defined (to the operating system) as a PAV-Base; however, the PPRC secondary device is not yet brought online or made eligible for I/O operations (i.e., normal reads/writes are rejected with unit check status and sense data indicating the device is a PPRC secondary). For purposes of this invention, a PPRC secondary device bound as a PAV-Alias may hereinafter be referred to as a PAV-Secondary. The PAV-Secondary 106 also has a set of PAV-Alias devices 103, 104 that will be used when the secondary device is set to a usable state, known as a simplex state, after a system-error-causing event occurs; however, they are not initially bound to either the PAV-Base 105 or the PAV-Secondary 106.
FIG. 3 also shows exemplary logical volumes during initialization. During system initialization or after a device is varied online and after PAV-Aliases are bound to the PAV-Base as shown in FIG. 2, each PPRC secondary device 206 that is defined as a PAV-Base device (PAV secondary) is bound 212 to its corresponding PPRC primary device 205 as if the PPRC secondary device 206 were a PAV-Alias device and the corresponding PPRC primary device 205 were defined as a PAV-Base device. This binding process is accomplished by extending the PAV-Alias circular queue 211 to include the PAV-Secondary device 206. The PAV-Secondary device 206 UCB is then updated to point 212 to the PAV-Base PPRC primary device 205. The UCB Look Up Table entry for the PPRC secondary 206 is updated to indicate the UCB is now a PAV-Alias so that it won't be found by applications searching for PAV-Base UCBs. A PAV-Base UCB for a secondary device is referred to as a PAV-Secondary UCB.
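Splicing the PPRC secondary into the primary's circular alias queue, as described above, can be sketched as follows (hypothetical structures continuing the same illustrative model; `kind` stands in for the UCB Look Up Table entry):

```python
class UCB:
    """Illustrative stand-in for a unit control block."""
    def __init__(self, device_number: str, kind: str = "base"):
        self.device_number = device_number
        self.kind = kind          # "base", "alias", or "secondary"
        self.next_alias = None    # next entry in the circular alias queue
        self.base = None          # back-pointer to the PAV-Base

def bind_secondary(primary: UCB, aliases: list, secondary: UCB) -> list:
    """Extend the primary's circular PAV-Alias queue to include the secondary."""
    chain = aliases + [secondary]
    for i, u in enumerate(chain):
        u.base = primary                          # pointer 212 back to the base
        u.next_alias = chain[(i + 1) % len(chain)]
    # Update the lookup entry so applications searching for PAV-Base UCBs
    # no longer find the secondary.
    secondary.kind = "secondary"
    return chain

primary = UCB("2000")
secondary = UCB("3000")
chain = bind_secondary(primary, [UCB("20F1")], secondary)
print(secondary.base is primary, chain[0].next_alias is secondary)  # True True
```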
Once the PAV-Secondary device (PPRC secondary) 206 has been bound to the PPRC primary device 205, a Perform Subsystem Function (PSF) command is issued to the PPRC secondary device 206 to instruct it that it can accept and execute read requests that do not require extent serialization. Extent serialization is an attribute of the Extended Count-Key Data (ECKD) architecture whereby two or more channel programs that access the same tracks (extents) on the same disk cannot execute concurrently if either channel program allows for a write. ESS (Shark) allows applications to bypass extent serialization in cases where the application guarantees the required serialization; this bypass allows for a reduction in overhead. For a further discussion of extent serialization, U.S. Pat. No. 6,240,467, the contents of which are herein incorporated by reference, may be reviewed. The PPRC secondary device 206 UCB is then marked as “blocked” so that only I/O requests that are “read only” and do not require extent serialization may be executed.
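The “blocked” state amounts to a simple admission check on each request, which can be sketched as follows (the request fields are hypothetical, chosen only to illustrate the rule stated above):

```python
from dataclasses import dataclass

@dataclass
class IORequest:
    """Hypothetical I/O request attributes relevant to the blocked state."""
    is_read: bool
    needs_extent_serialization: bool

def secondary_accepts(req: IORequest, blocked: bool = True) -> bool:
    """A blocked PAV-Secondary executes only read-only requests that do not
    require extent serialization; an unblocked device accepts everything."""
    if not blocked:
        return True
    return req.is_read and not req.needs_extent_serialization

print(secondary_accepts(IORequest(True, False)))   # True: read, no serialization
print(secondary_accepts(IORequest(True, True)))    # False: read needs serialization
print(secondary_accepts(IORequest(False, False)))  # False: write
```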
FIG. 4 shows exemplary logical volumes during address switching. Once a permanent I/O error is received for a device, a recovery process is initiated. The current PAV-Alias devices 301, 302 UCBs are unbound from the PAV-Base device 305 (PPRC primary). The PPRC secondary device 306 UCB remains as the only PAV-Alias bound to the PAV-Base device 305 (PPRC primary). The circular queue for the PAV-Aliases contains only the PAV-Secondary 311. The terminate PPRC pair PSF command is issued to the PPRC secondary device 306 in order to break the PPRC pair and set it into the simplex state. The PAV-Base device 305 UCB is marked so that no I/O requests can be executed to it. The PAV-Secondary device 306 (former PPRC secondary, now in the simplex state) is now marked as unblocked so that all I/O requests can be started. The Manage Alias PSF command is issued to the PAV-Alias device 306 in order to force a device state transition from every PAV-Alias associated with it. Each PAV-Alias device presents an interrupt 303, 304 and is bound to the PAV-Base device 305, which now corresponds to the PAV-Alias device 306.
FIG. 5 also shows exemplary logical volumes during address switching. PAV-Alias devices 403, 404 have been bound to the PAV-Base device 405 that includes the PAV-Alias 406 (former PPRC secondary). All I/O requests from applications allocated to the PAV-Base device 405 (former PPRC primary) are now directed to the former PPRC secondary device 406 and all of its PAV-Alias devices 403, 404.
FIG. 6 is a flow chart of an exemplary initialization process in accordance with the present invention. During system initialization at 501, after all the PAV-Alias devices have been processed (bound to their corresponding PAV-Bases) following the establishment of a PPRC device pair at 502, or after a PPRC primary device is varied online at 503, the operating system locates the PPRC secondary devices to initialize. The Input/Output Supervisor (IOS) component of the operating system finds every PPRC secondary device located at 504. For each unbound PPRC secondary device located at 504, the corresponding PAV-Base UCB for the PPRC primary device is located at 505. If there is no PPRC primary, or the PPRC primary is not defined to be a PAV-Base device UCB at 506, the PPRC secondary device is ignored and processing continues to the next PPRC secondary device at 511. Once all the PPRC secondary devices have been processed, processing terminates.
If a PAV-Base UCB for the PPRC primary device is found at 506, the PAV-Base PPRC secondary UCB is bound to the PAV-Base at 507 as described for FIG. 2. Next, the PPRC secondary device is marked as a PAV-Secondary device UCB at 508. The PAV-Secondary UCB is marked at 509 such that all I/O to the PAV-Secondary UCB, except read-only requests that do not require extent serialization, is prevented by altering a flag bit in the I/O request block. The UCB Look Up Table entry for the PPRC secondary device is updated to mark the PAV-Secondary device as a PAV-Alias device at 510 so that the UCB cannot be found by applications. If it is determined that there are additional devices at 511, steps 504–510 are repeated for each such determined device. It is understood that the initialization steps described herein are completed prior to the occurrence of a failure requiring a recovery and thereby reduce the number of steps and the amount of time necessary for recovery from the failure.
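The FIG. 6 loop can be condensed into a short sketch (the dictionaries stand in for UCB state and the UCB Look Up Table; step numbers refer to FIG. 6):

```python
def initialize(secondaries: list, primary_base_for: dict) -> list:
    """Bind each PPRC secondary that has a PAV-Base primary as a blocked
    PAV-Secondary (steps 504-510 of FIG. 6, sketched).

    `primary_base_for` maps a secondary to its primary's PAV-Base UCB;
    secondaries with no such entry are ignored (step 506)."""
    bound = []
    for sec in secondaries:                   # step 504: locate each secondary
        base = primary_base_for.get(sec)      # step 505: find primary's PAV-Base
        if base is None:                      # step 506: none found
            continue                          # ignore, continue at 511
        sec_state = {
            "bound_to": base,                 # step 507: bind to the PAV-Base
            "kind": "pav-secondary",          # step 508: mark as PAV-Secondary
            "blocked": True,                  # step 509: reads only, no serialization
            "lookup_entry": "pav-alias",      # step 510: hidden from applications
        }
        bound.append((sec, sec_state))
    return bound                              # all secondaries processed

result = initialize(["S1", "S2"], {"S1": "B1"})  # S2 has no PAV-Base primary
print(len(result), result[0][1]["blocked"])  # 1 True
```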
FIGS. 7 and 8 comprise a flowchart of an exemplary recovery process in accordance with the present invention. The first host computer system (sharing access to the devices being recovered) to detect a permanent I/O error (e.g. deferred condition code 3 for application I/O request 600) begins a recovery process at 601. The PAV-Base UCB is quiesced at 602 so that no new I/O requests can be started to the logical volume. A read-subsystem-status PSF command is issued to the PAV-Secondary UCB at 603 to determine the state of the device. If the PAV-Secondary is still in a PPRC pair with the primary at 604 then the terminate pair PSF is issued at 605. The sharing systems will see a device state transition when the PPRC pair is broken. After the terminate pair PSF is issued at 605, all future I/O is blocked to the PAV-Base at 606 and all I/O is now allowed to the PAV-Secondary at 607. Next, all current PAV-Alias UCBs, except for the PAV-Secondary UCB are unbound at 609. The manage alias PSF is then issued to the PAV-Secondary device at 610 in order to force a device state transition from all the PAV-Alias UCBs for the PAV-Secondary device. This causes interrupts to be presented for the PAV-Alias devices so that they can become bound to the PAV-Secondary. The device is unquiesced (I/O activity is allowed to resume) at 611.
If the device was not found to be a PPRC pair at 604 but is found to be in the simplex state at 608, another sharing system must have issued the terminate pair PSF first. In such a case, processing continues beginning with step 606 and as described above. If the device state could not be determined at 608 processing continues beginning with step 611 and as described above.
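The recovery path of FIGS. 7 and 8 can be sketched as follows (the dictionaries are hypothetical stand-ins for UCB state, and the PSF commands are represented only as state changes; step numbers refer to the figures):

```python
def recover(base: dict, secondary: dict) -> None:
    """Recovery after a permanent I/O error (FIGS. 7-8, sketched)."""
    base["quiesced"] = True                   # 602: no new I/O to the volume
    # 603-604: read-subsystem-status PSF determines the secondary's state.
    if secondary["pprc_state"] == "pair":
        secondary["pprc_state"] = "simplex"   # 605: terminate pair PSF
    if secondary["pprc_state"] == "simplex":  # also reached via 608 when another
        base["blocked"] = True                # sharing system broke the pair first
        secondary["blocked"] = False          # 607: allow all I/O to the secondary
        base["aliases"] = [secondary]         # 609: unbind the other PAV-Aliases
        # 610: manage alias PSF forces a device state transition so the freed
        # aliases re-bind to the PAV-Secondary (omitted from this sketch).
    base["quiesced"] = False                  # 611: resume I/O

base = {"quiesced": False, "blocked": False, "aliases": ["A1", "A2", "SEC"]}
sec = {"pprc_state": "pair", "blocked": True}
recover(base, sec)
print(base["blocked"], sec["blocked"], len(base["aliases"]))  # True False 1
```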
FIGS. 9 and 10 comprise a flowchart of additional exemplary steps for the recovery process of FIGS. 7 and 8. When the PPRC pair is terminated by issuing the terminate pair PSF (shown at step 605 of FIG. 7), all systems with access to the device receive a device state transition interrupt at 700. The interrupt handler gets control at 701 and quiesces the PAV-Base UCB at 702 so that no application I/O requests can be started for the logical volume. If it is determined that the interrupt occurred on a PAV-Secondary UCB at 703, then the read-subsystem-data PSF is issued at 704 to determine if the PAV-Secondary is still in a PPRC pair. If the PAV-Secondary is not determined to be in a PPRC pair at 705, then all PAV-Alias UCBs, except for the PAV-Secondary, are unbound from the PAV-Base at 706. All future I/O requests are blocked from executing to the PAV-Base at 707 (similar to step 606). Next, the PAV-Secondary UCB is marked to allow future I/O requests to execute to the PAV-Secondary at 708. The manage alias PSF command is issued at 709 to force a device state transition from all the PAV-Alias UCBs for the PAV-Secondary device. The device is unquiesced to resume application I/O for the logical volume at 710.
If it is determined that the interrupt did not occur on a PAV-Secondary UCB at 703 or if the PAV-Secondary is determined to be in a PPRC pair at 705, traditional processing at 711 is performed. Traditional processing consists of validating the device self description data for currently bound PAV-Alias UCBs to verify that the UCBs are bound to the current logical volume. If the device state transition occurred for an unbound PAV-Alias UCB, the correct PAV-Base is found and the PAV-Alias is bound to it. When PAV-Secondary UCBs are active (I/O is allowed), PAV-Alias UCBs for the PAV-Secondary are bound to the PAV-Base that also binds the PAV-Secondary. Upon completion of traditional processing, processing continues beginning with step 710 and as described above.
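The branch taken by each sharing system's interrupt handler (FIGS. 9 and 10) reduces to a small decision, sketched here with hypothetical names:

```python
def handle_state_transition(ucb_kind: str, in_pprc_pair: bool) -> str:
    """Choose the action for a device state transition interrupt (FIGS. 9-10).

    Returns "switch" for the address-switching path (steps 706-709: unbind
    aliases, block the base, unblock the secondary) and "traditional" for
    step 711 (validate self-description data, rebind PAV-Aliases)."""
    if ucb_kind == "pav-secondary" and not in_pprc_pair:  # steps 703 and 705
        return "switch"
    return "traditional"

print(handle_state_transition("pav-secondary", False))  # switch
print(handle_state_transition("pav-base", False))       # traditional
```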
The description of the above embodiments is merely illustrative. As described above, embodiments in the form of computer-implemented processes and apparatuses for practicing those processes may be included. Also included may be embodiments in the form of computer program code containing instructions embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other computer-readable storage medium, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. Also included may be embodiments in the form of computer program code, for example, whether stored in a storage medium, loaded into and/or executed by a computer, or as a data signal transmitted, whether a modulated carrier wave or not, over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the computer program code is loaded into and executed by a computer, the computer becomes an apparatus for practicing the invention. When implemented on a general-purpose microprocessor, the computer program code segments configure the microprocessor to create specific logic circuits.
While the invention has been described with reference to exemplary embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiments disclosed for carrying out this invention, but that the invention will include all embodiments falling within the scope of the appended claims.

Claims (21)

1. A method for automatic peer to peer address switching, comprising:
performing an initialization for at least one primary-secondary device pair, the initialization including:
defining a second device as a logical alias of a first device, said first device being a primary device of said device pair and said second device being a secondary device of said device pair, said primary device and said secondary device forming said device pair, and
preventing said second device from accepting I/O requests; and
performing an address switching for said primary-secondary device pair upon a determination that said address switching is desired, the address switching including:
terminating said device pair binding,
terminating all logical alias bindings to said first device except said logical alias binding of said first device to said second device,
preventing said first device from receiving I/O requests, and
allowing said second device to receive I/O requests.
2. A method as in claim 1 wherein said initialization is performed concurrently for multiple device pairs.
3. A method as in claim 1 wherein said address switching is performed concurrently for multiple device pairs.
4. A method as in claim 1 wherein said device pair is a PPRC device pair.
5. A method as in claim 1 wherein said first device is a PAV-Base to said second device and said second device is a PAV-Alias to said first device.
6. A method as in claim 1 wherein said second device is defined as a PAV-Secondary to said first device.
7. A method as in claim 5 wherein said second device is a PAV-Base to at least one PAV-Alias.
8. A method as in claim 7 wherein said initialization further comprises terminating the logical binding of said second device as a PAV-Base to said PAV-Alias.
9. A method as in claim 8 wherein said address switching further comprises binding said unbound PAV-Alias to said first device.
10. A method of using a PPRC secondary device to respond to read requests, comprising:
setting said PPRC secondary device as a PAV-Secondary device for a PPRC primary device, said PPRC primary device being defined as a PAV-Base and being bound to a first set of PAV-Alias devices, said PPRC secondary device having defined a second set of PAV-Alias devices, said secondary device not being in an online state prior to said setting;
including said PAV-Secondary device in a PAV-Alias queue, said PAV-Alias queue being a PAV-Alias queue for said PAV-Base device;
updating a UCB lookup entry, said update indicating that said PPRC secondary device is defined as a PAV-Alias; and
instructing said PPRC secondary device to accept and respond to read requests.
11. A method as in claim 10 wherein said PPRC secondary device accepts and responds only to read requests not requiring extent serialization.
12. A method for automatic peer to peer address switching, comprising:
performing an initialization for at least one PPRC device pair, the initialization including:
setting a PPRC secondary device as a PAV-Secondary device for a PPRC primary device, said PPRC primary device being defined as a PAV-Base and being bound to a first set of PAV-Alias devices, said PPRC secondary device having defined a second set of PAV-Alias devices, said secondary device not being in an online state prior to said setting,
including said PAV-Secondary device in a PAV-Alias queue, said PAV-Alias queue being a PAV-Alias queue for said PAV-Base device, and
updating a UCB lookup entry, said update indicating that said PPRC secondary device is defined as a PAV-Alias; and
performing an address switching for said PPRC device pair upon a determination that said address switching is desired, the address switching including:
unbinding said first set of PAV-Alias devices from said PAV-Base,
terminating a PPRC binding, said PPRC binding defining said PPRC primary device and said PPRC secondary device as a PPRC pair,
blocking I/O requests directed to said PAV-Base, and
allowing I/O requests to said PAV-Secondary.
13. A method as in claim 12 wherein said initialization is performed concurrently for multiple device pairs.
14. A method as in claim 12 wherein said address switching is performed concurrently for multiple device pairs.
15. A method as in claim 12, wherein said initialization further comprises instructing said PPRC secondary device to accept and respond to read requests.
16. A method as in claim 15 wherein said PPRC secondary device accepts and responds only to read requests not requiring extent serialization.
17. A method as in claim 12 wherein address switching further comprises including said second set of PAV-Alias devices in said PAV-Alias queue.
18. A system for automatic peer to peer address switching, comprising:
a first device and a second device, said first device being a PPRC primary device, said second device being a PPRC secondary device, said PPRC primary and said PPRC secondary devices being a PPRC device pair; and
at least one processor, said processor being in communication with said PPRC primary device and said PPRC secondary device, said processor for executing computer program code for performing automatic address switching; and
computer program code for performing the following steps:
setting said PPRC secondary device as a PAV-Secondary device for a PPRC primary device, said PPRC primary device being defined as a PAV-Base and being bound to a first set of PAV-Alias devices, said PPRC secondary device having defined a second set of PAV-Alias devices, said PPRC secondary device not being in an online state prior to said setting,
including said PAV-Secondary device in a PAV-Alias queue, said PAV-Alias queue being a PAV-Alias queue for said PAV-Base device,
updating a UCB lookup entry, said update indicating that said PPRC secondary device is defined as a PAV-Alias, and
unbinding said first set of PAV-Alias devices from said PAV-Base,
terminating a PPRC binding, said PPRC binding defining said PPRC primary device and said PPRC secondary device as a PPRC pair,
blocking I/O requests directed to said PAV-Base, and
allowing I/O requests to said PAV-Secondary.
19. An article of manufacture comprising:
a computer usable medium having computer readable program code for performing automatic peer to peer address switching, said computer readable program code further comprising computer readable program code for:
performing an initialization for at least one PPRC device pair, the initialization including:
setting a PPRC secondary device as a PAV-Secondary device for a PPRC primary device, said PPRC primary device being defined as a PAV-Base and being bound to a first set of PAV-Alias devices, said PPRC secondary device having defined a second set of PAV-Alias devices, said PPRC secondary device not being in an online state prior to said setting,
including said PAV-Secondary device in a PAV-Alias queue, said PAV-Alias queue being a PAV-Alias queue for said PAV-Base device, and
updating a UCB lookup entry, said update indicating that said PPRC secondary device is defined as a PAV-Alias; and
performing an address switching for said PPRC device pair upon a determination that said address switching is desired, said address switching including:
unbinding said first set of PAV-Alias devices from said PAV-Base, and
terminating a PPRC binding, said PPRC binding defining said PPRC primary device and said PPRC secondary device as a PPRC pair,
blocking I/O requests directed to said PAV-Base, and
allowing I/O requests to said PAV-Secondary.
20. An article of manufacture as in claim 19 wherein said initialization is performed concurrently for multiple device pairs.
21. An article of manufacture as in claim 19 wherein said address switching is performed concurrently for multiple device pairs.
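The address-switching phase recited in claims 12, 18, and 19 can likewise be sketched as a toy model: the first set of PAV-Aliases is unbound from the base, the PPRC pairing is terminated, I/O to the PAV-Base is blocked, and I/O to the PAV-Secondary is allowed. Again, every name and device number here is an illustrative assumption, not part of the patent.

```python
class Device:
    """Toy device with an I/O gate, a set of bound alias addresses, and a name."""
    def __init__(self, name, aliases=()):
        self.name = name
        self.accepts_io = True
        self.aliases = set(aliases)

def switch_address(primary, secondary, pprc_pairs):
    """Address switching per claim 12: unbind the base's aliases, terminate
    the PPRC binding, block I/O to the PAV-Base, allow I/O to the
    PAV-Secondary."""
    primary.aliases.clear()                             # unbind first PAV-Alias set
    pprc_pairs.discard((primary.name, secondary.name))  # terminate PPRC binding
    primary.accepts_io = False                          # block I/O to the PAV-Base
    secondary.accepts_io = True                         # allow I/O to the PAV-Secondary

primary = Device("0x1000", aliases={"0x10F0", "0x10F1"})  # PAV-Base / PPRC primary
secondary = Device("0x2000")                              # PAV-Secondary / PPRC secondary
pprc_pairs = {("0x1000", "0x2000")}                       # the PPRC device pair
switch_address(primary, secondary, pprc_pairs)
```

Because the secondary was already queued as an alias of the base during initialization, in-flight requests can be redriven over the secondary's address without the application ever seeing the primary go away.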
US10/134,414 2002-04-29 2002-04-29 System and method for automatic dynamic address switching Expired - Lifetime US6973586B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/134,414 US6973586B2 (en) 2002-04-29 2002-04-29 System and method for automatic dynamic address switching


Publications (2)

Publication Number Publication Date
US20030204773A1 US20030204773A1 (en) 2003-10-30
US6973586B2 true US6973586B2 (en) 2005-12-06

Family

ID=29249225

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/134,414 Expired - Lifetime US6973586B2 (en) 2002-04-29 2002-04-29 System and method for automatic dynamic address switching

Country Status (1)

Country Link
US (1) US6973586B2 (en)

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050097391A1 (en) * 2003-10-20 2005-05-05 International Business Machines Corporation Method, system, and article of manufacture for data replication
US20060123273A1 (en) * 2004-11-15 2006-06-08 Kalos Matthew J Reassigning storage volumes from a failed processing system to a surviving processing system
US20060143711A1 (en) * 2004-12-01 2006-06-29 Yih Huang SCIT-DNS: critical infrastructure protection through secure DNS server dynamic updates
US20070070535A1 (en) * 2005-09-27 2007-03-29 Fujitsu Limited Storage system and component replacement processing method thereof
US20080104347A1 (en) * 2006-10-30 2008-05-01 Takashige Iwamura Information system and data transfer method of information system
US20080104346A1 (en) * 2006-10-30 2008-05-01 Yasuo Watanabe Information system and data transfer method
US20080104354A1 (en) * 2006-10-31 2008-05-01 International Business Machines Corporation Dynamic Operation Mode Transition of a Storage Subsystem
US20080104443A1 (en) * 2006-10-30 2008-05-01 Hiroaki Akutsu Information system, data transfer method and data protection method
US20090013014A1 (en) * 2003-06-18 2009-01-08 International Business Machines Corporation Method, system, and article of manufacture for mirroring data at storage locations
US20090182996A1 (en) * 2008-01-14 2009-07-16 International Business Machines Corporation Methods and Computer Program Products for Swapping Synchronous Replication Secondaries from a Subchannel Set Other Than Zero to Subchannel Set Zero Using Dynamic I/O
US20090193292A1 (en) * 2008-01-25 2009-07-30 International Business Machines Corporation Methods And Computer Program Products For Defining Synchronous Replication Devices In A Subchannel Set Other Than Subchannel Set Zero
US20100023647A1 (en) * 2008-07-28 2010-01-28 International Business Machines Corporation Swapping pprc secondaries from a subchannel set other than zero to subchannel set zero using control block field manipulation
US20100095066A1 (en) * 2008-09-23 2010-04-15 1060 Research Limited Method for caching resource representations in a contextual address space
US20110239040A1 (en) * 2010-03-23 2011-09-29 International Business Machines Corporation Parallel Multiplex Storage Systems
US8966211B1 (en) * 2011-12-19 2015-02-24 Emc Corporation Techniques for dynamic binding of device identifiers to data storage devices
CN105278522A (en) * 2015-10-16 2016-01-27 浪潮(北京)电子信息产业有限公司 Remote replication method and system
US9262092B2 (en) 2014-01-30 2016-02-16 International Business Machines Corporation Management of extent checking in a storage controller during copy services operations
US10216641B2 (en) 2017-01-13 2019-02-26 International Business Systems Corporation Managing and sharing alias devices across logical control units

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8782654B2 (en) 2004-03-13 2014-07-15 Adaptive Computing Enterprises, Inc. Co-allocating a reservation spanning different compute resources types
US20070266388A1 (en) 2004-06-18 2007-11-15 Cluster Resources, Inc. System and method for providing advanced reservations in a compute environment
US8176490B1 (en) 2004-08-20 2012-05-08 Adaptive Computing Enterprises, Inc. System and method of interfacing a workload manager and scheduler with an identity manager
CA2586763C (en) 2004-11-08 2013-12-17 Cluster Resources, Inc. System and method of providing system jobs within a compute environment
US20060161752A1 (en) * 2005-01-18 2006-07-20 Burkey Todd R Method, apparatus and program storage device for providing adaptive, attribute driven, closed-loop storage management configuration and control
US7941602B2 (en) * 2005-02-10 2011-05-10 Xiotech Corporation Method, apparatus and program storage device for providing geographically isolated failover using instant RAID swapping in mirrored virtual disks
US8863143B2 (en) 2006-03-16 2014-10-14 Adaptive Computing Enterprises, Inc. System and method for managing a hybrid compute environment
US9075657B2 (en) 2005-04-07 2015-07-07 Adaptive Computing Enterprises, Inc. On-demand access to compute resources
US9231886B2 (en) 2005-03-16 2016-01-05 Adaptive Computing Enterprises, Inc. Simple integration of an on-demand compute environment
US9225663B2 (en) 2005-03-16 2015-12-29 Adaptive Computing Enterprises, Inc. System and method providing a virtual private cluster
CA2601384A1 (en) 2005-03-16 2006-10-26 Cluster Resources, Inc. Automatic workload transfer to an on-demand center
US20060218360A1 (en) * 2005-03-22 2006-09-28 Burkey Todd R Method, apparatus and program storage device for providing an optimized read methodology for synchronously mirrored virtual disk pairs
US8041773B2 (en) 2007-09-24 2011-10-18 The Research Foundation Of State University Of New York Automatic clustering for self-organizing grids
US7971013B2 (en) * 2008-04-30 2011-06-28 Xiotech Corporation Compensating for write speed differences between mirroring storage devices by striping
US20100011176A1 (en) * 2008-07-11 2010-01-14 Burkey Todd R Performance of binary bulk IO operations on virtual disks by interleaving
US20100011371A1 (en) * 2008-07-11 2010-01-14 Burkey Todd R Performance of unary bulk IO operations on virtual disks by interleaving
US10877695B2 (en) 2009-10-30 2020-12-29 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US11720290B2 (en) 2009-10-30 2023-08-08 Iii Holdings 2, Llc Memcached server functionality in a cluster of data processing nodes
US8423822B2 (en) 2011-03-23 2013-04-16 Hitachi, Ltd. Storage system and method of controlling the same
US11099741B1 (en) * 2017-10-31 2021-08-24 EMC IP Holding Company LLC Parallel access volume I/O processing with intelligent alias selection across logical control units

Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4837680A (en) 1987-08-28 1989-06-06 International Business Machines Corporation Controlling asynchronously operating peripherals
US4894828A (en) 1987-12-22 1990-01-16 Amdahl Corporation Multiple sup swap mechanism
US5289589A (en) 1990-09-10 1994-02-22 International Business Machines Corporation Automated storage library having redundant SCSI bus system
US5327531A (en) 1992-09-21 1994-07-05 International Business Machines Corp. Data processing system including corrupt flash ROM recovery
US5720029A (en) 1995-07-25 1998-02-17 International Business Machines Corporation Asynchronously shadowing record updates in a remote copy session using track arrays
US5828847A (en) 1996-04-19 1998-10-27 Storage Technology Corporation Dynamic server switching for maximum server availability and load balancing
US5870537A (en) 1996-03-13 1999-02-09 International Business Machines Corporation Concurrent switch to shadowed device for storage controller and device errors
US5966301A (en) 1997-06-13 1999-10-12 Allen-Bradley Company, Llc Redundant processor controller providing upgrade recovery
US6108300A (en) * 1997-05-02 2000-08-22 Cisco Technology, Inc. Method and apparatus for transparently providing a failover network device
US6145066A (en) 1997-11-14 2000-11-07 Amdahl Corporation Computer system with transparent data migration between storage volumes
US6167459A (en) * 1998-10-07 2000-12-26 International Business Machines Corporation System for reassigning alias addresses to an input/output device
US6199074B1 (en) 1997-10-09 2001-03-06 International Business Machines Corporation Database backup system ensuring consistency between primary and mirrored backup database copies despite backup interruption
US6240467B1 (en) * 1998-10-07 2001-05-29 International Business Machines Corporation Input/output operation request handling in a multi-host system
US6292905B1 (en) * 1997-05-13 2001-09-18 Micron Technology, Inc. Method for providing a fault tolerant network using distributed server processes to remap clustered network resources to other servers during server failure
US6442709B1 (en) * 1999-02-09 2002-08-27 International Business Machines Corporation System and method for simulating disaster situations on peer to peer remote copy machines
US6499112B1 (en) * 2000-03-28 2002-12-24 Storage Technology Corporation Automatic stand alone recovery for peer to peer remote copy (PPRC) operations
US20030018813A1 (en) * 2001-01-17 2003-01-23 Antes Mark L. Methods, systems and computer program products for providing failure recovery of network secure communications in a cluster computing environment
US6578158B1 (en) * 1999-10-28 2003-06-10 International Business Machines Corporation Method and apparatus for providing a raid controller having transparent failover and failback
US20030188233A1 (en) * 2002-03-28 2003-10-02 Clark Lubbers System and method for automatic site failover in a storage area network
US6888792B2 (en) * 2000-12-07 2005-05-03 Intel Corporation Technique to provide automatic failover for channel-based communications

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8209282B2 (en) 2003-06-18 2012-06-26 International Business Machines Corporation Method, system, and article of manufacture for mirroring data at storage locations
US20090019096A1 (en) * 2003-06-18 2009-01-15 International Business Machines Corporation System and article of manufacture for mirroring data at storage locations
US8027952B2 (en) 2003-06-18 2011-09-27 International Business Machines Corporation System and article of manufacture for mirroring data at storage locations
US20090013014A1 (en) * 2003-06-18 2009-01-08 International Business Machines Corporation Method, system, and article of manufacture for mirroring data at storage locations
US20050097391A1 (en) * 2003-10-20 2005-05-05 International Business Machines Corporation Method, system, and article of manufacture for data replication
US7376859B2 (en) * 2003-10-20 2008-05-20 International Business Machines Corporation Method, system, and article of manufacture for data replication
US7797577B2 (en) 2004-11-15 2010-09-14 International Business Machines Corporation Reassigning storage volumes from a failed processing system to a surviving processing system
US20060123273A1 (en) * 2004-11-15 2006-06-08 Kalos Matthew J Reassigning storage volumes from a failed processing system to a surviving processing system
US7437608B2 (en) * 2004-11-15 2008-10-14 International Business Machines Corporation Reassigning storage volumes from a failed processing system to a surviving processing system
US20060143711A1 (en) * 2004-12-01 2006-06-29 Yih Huang SCIT-DNS: critical infrastructure protection through secure DNS server dynamic updates
US7680955B2 (en) * 2004-12-01 2010-03-16 George Mason Intellectual Properties, Inc. SCIT-DNS: critical infrastructure protection through secure DNS server dynamic updates
US20070070535A1 (en) * 2005-09-27 2007-03-29 Fujitsu Limited Storage system and component replacement processing method thereof
US8386839B2 (en) 2006-10-30 2013-02-26 Hitachi, Ltd. Information system and data transfer method
US8281179B2 (en) 2006-10-30 2012-10-02 Hitachi, Ltd. Information system, data transfer method and data protection method
US20080104346A1 (en) * 2006-10-30 2008-05-01 Yasuo Watanabe Information system and data transfer method
US7802131B2 (en) 2006-10-30 2010-09-21 Hitachi, Ltd. Information system and data transfer method
US20080104443A1 (en) * 2006-10-30 2008-05-01 Hiroaki Akutsu Information system, data transfer method and data protection method
US20100205479A1 (en) * 2006-10-30 2010-08-12 Hiroaki Akutsu Information system, data transfer method and data protection method
US7739540B2 (en) 2006-10-30 2010-06-15 Hitachi, Ltd. Information system, data transfer method and data protection method
US8832397B2 (en) 2006-10-30 2014-09-09 Hitachi, Ltd. Information system and data transfer method of information system
US8090979B2 (en) 2006-10-30 2012-01-03 Hitachi, Ltd. Information system and data transfer method
US8595453B2 (en) 2006-10-30 2013-11-26 Hitachi, Ltd. Information system and data transfer method of information system
US20080104347A1 (en) * 2006-10-30 2008-05-01 Takashige Iwamura Information system and data transfer method of information system
US20100313068A1 (en) * 2006-10-30 2010-12-09 Yasuo Watanabe Information system and data transfer method
US7925914B2 (en) 2006-10-30 2011-04-12 Hitachi, Ltd. Information system, data transfer method and data protection method
US20110154102A1 (en) * 2006-10-30 2011-06-23 Hiroaki Akutsu Information system, data transfer method and data protection method
US7657715B2 (en) 2006-10-31 2010-02-02 International Business Machines Corporation Dynamic operation mode transition of a storage subsystem
US20080104354A1 (en) * 2006-10-31 2008-05-01 International Business Machines Corporation Dynamic Operation Mode Transition of a Storage Subsystem
US8307129B2 (en) 2008-01-14 2012-11-06 International Business Machines Corporation Methods and computer program products for swapping synchronous replication secondaries from a subchannel set other than zero to subchannel set zero using dynamic I/O
US20090182996A1 (en) * 2008-01-14 2009-07-16 International Business Machines Corporation Methods and Computer Program Products for Swapping Synchronous Replication Secondaries from a Subchannel Set Other Than Zero to Subchannel Set Zero Using Dynamic I/O
US20090193292A1 (en) * 2008-01-25 2009-07-30 International Business Machines Corporation Methods And Computer Program Products For Defing Synchronous Replication Devices In A Subchannel Set Other Than Subchannel Set Zero
US7761610B2 (en) * 2008-01-25 2010-07-20 International Business Machines Corporation Methods and computer program products for defining synchronous replication devices in a subchannel set other than subchannel set zero
US20100023647A1 (en) * 2008-07-28 2010-01-28 International Business Machines Corporation Swapping pprc secondaries from a subchannel set other than zero to subchannel set zero using control block field manipulation
US8516173B2 (en) 2008-07-28 2013-08-20 International Business Machines Corporation Swapping PPRC secondaries from a subchannel set other than zero to subchannel set zero using control block field manipulation
US20100095066A1 (en) * 2008-09-23 2010-04-15 1060 Research Limited Method for caching resource representations in a contextual address space
US8135982B2 (en) 2010-03-23 2012-03-13 International Business Machines Corporation Parallel multiplex storage systems
US20110239040A1 (en) * 2010-03-23 2011-09-29 International Business Machines Corporation Parallel Multiplex Storage Systems
US8966211B1 (en) * 2011-12-19 2015-02-24 Emc Corporation Techniques for dynamic binding of device identifiers to data storage devices
US9785358B2 (en) 2014-01-30 2017-10-10 International Business Machines Corporation Management of extent checking in a storage controller during copy services operations
US9262092B2 (en) 2014-01-30 2016-02-16 International Business Machines Corporation Management of extent checking in a storage controller during copy services operations
CN105278522B (en) * 2015-10-16 2018-09-14 浪潮(北京)电子信息产业有限公司 A kind of remote copy method and system
CN105278522A (en) * 2015-10-16 2016-01-27 浪潮(北京)电子信息产业有限公司 Remote replication method and system
US10216641B2 (en) 2017-01-13 2019-02-26 International Business Systems Corporation Managing and sharing alias devices across logical control units

Also Published As

Publication number Publication date
US20030204773A1 (en) 2003-10-30

Similar Documents

Publication Publication Date Title
US6973586B2 (en) System and method for automatic dynamic address switching
US7085956B2 (en) System and method for concurrent logical device swapping
US7467168B2 (en) Method for mirroring data at storage locations
US7043665B2 (en) Method, system, and program for handling a failover to a remote storage location
US7117386B2 (en) SAR restart and going home procedures
US7657718B1 (en) Storage automated replication processing
US8914671B2 (en) Multiple hyperswap replication sessions
US8166241B2 (en) Method of improving efficiency of capacity of volume used for copy function and apparatus thereof
US9141639B2 (en) Bitmap selection for remote copying of updates
KR100604242B1 (en) File server storage arrangement
KR100575497B1 (en) Fault tolerant computer system
US8060478B2 (en) Storage system and method of changing monitoring condition thereof
JP4422519B2 (en) Information processing system
US20090006794A1 (en) Asynchronous remote copy system and control method for the same
CN110998538A (en) Asynchronous local and remote generation of consistent point-in-time snapshot copies in a consistency group
US10719244B2 (en) Multi-mode data replication for data loss risk reduction
US20110167044A1 (en) Computing system and backup method using the same
US10884872B2 (en) Device reservation state preservation in data mirroring
US10613946B2 (en) Device reservation management for overcoming communication path disruptions
US10248511B2 (en) Storage system having multiple local and remote volumes and multiple journal volumes using dummy journals for sequence control

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PETERSEN, DAVID B.;STAUBI, JOHN A.;YUDENFRIEND, HARRY H.;REEL/FRAME:012862/0517

Effective date: 20020426

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

FPAY Fee payment

Year of fee payment: 4

REMI Maintenance fee reminder mailed
FPAY Fee payment

Year of fee payment: 8

SULP Surcharge for late payment

Year of fee payment: 7

FPAY Fee payment

Year of fee payment: 12