US20060288049A1 - Method, System and Computer Program for Concurrent File Update - Google Patents
- Publication number
- US20060288049A1 (application US11/425,230)
- Authority
- US
- United States
- Prior art keywords
- update
- shared resource
- applying
- handler
- resource
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/52—Program synchronisation; Mutual exclusion, e.g. by means of semaphores
- G06F9/526—Mutual exclusion algorithms
Definitions
- the handler 225 must have exclusive write access to the control file 220 during the execution of every plan; the operation of the handler 225 is very critical, so that its stops must be limited as much as possible.
- the writing of any new plan into the control file 220 involves a massive update that typically takes a quite long time (of the order of some tens of minutes). Therefore, it is not possible to perform the operation while the previous plan is in execution. As a result, the scheduler 200 is not able to work uninterruptedly 24 hours per day.
- A solution that allows the planner 215 to write a new plan into the control file 220 without substantially stopping operation of the handler 225 is illustrated in FIGS. 3 a - 3 i.
- the handler 225 accesses the control file 220 for updating its content (bold arrow 305 ); in this phase, the control file 220 is owned exclusively by the handler 225 (any other process can at most read it).
- the planner 215 wishes to write a new plan into the control file 220 . To this end, the planner 215 notifies the handler 225 accordingly (thin arrow 310 ).
- the process continues to FIG. 3 c, wherein the handler 225 stops working in response to the request received from the planner 215 .
- the handler 225 then makes a copy (bold arrow 315 ) of the control file 220 (denoted with 320 ).
- the handler 225 notifies the planner 215 as soon as the operation completes (thin arrow 325 ). It should be noted that the time required for copying the control file 220 is very short (of the order of a few seconds).
- the handler 225 can restart working (bold arrow 305 ) on the control file 220 ; however, an indication of all the updates performed on the control file 220 is saved (bold arrow 330 ) into a corresponding log 335 .
- the log 335 only stores the events that cause the handler 225 to act on the control file 220 (i.e., the feedback information returned by the executor when a job completes either successfully or in error).
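The log described above can be pictured as a simple append-only file of completion events. The sketch below is purely illustrative: the class name, the JSON line format, and the file layout are assumptions, not taken from the patent.

```python
import json

# Hypothetical append-only event log, playing the role of log 335:
# while the planner rewrites the copy of the plan, the handler records
# only the events (job-completion feedback) that drive its updates.
class EventLog:
    def __init__(self, path):
        self.path = path

    def append(self, event):
        # One JSON record per line; appending an event is cheap
        # compared with the planner's massive rewrite of the plan.
        with open(self.path, "a") as f:
            f.write(json.dumps(event) + "\n")

    def replay(self):
        # Yield the events in the order in which they were logged.
        with open(self.path) as f:
            for line in f:
                yield json.loads(line)
```

An actual scheduler would use its own record format; the only property the method relies on is that the events come back out in the order they went in.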
- the planner 215 can start writing the new plan (bold arrow 340 ) into the copy of the control file 320 .
- the flow of activity continues to FIG. 3 f. As soon as the planner 215 completes the operation, it notifies the handler 225 accordingly (thin arrow 345 ).
- the handler 225 stops working again.
- the handler 225 then deletes the original control file 220 (cross 345 ).
- the handler 225 reapplies all the updates made to the (original) control file to the corresponding copy 320 .
- the handler 225 extracts (bold arrow 350 ) the events saved in the log 335 in succession; the update corresponding to each event is then applied (bold arrow 355 ) to the copy of the control file 320 .
- the time required for the operation is very short (of the order of a few seconds) since the frequency of completion of the jobs (defining the corresponding logged events) is relatively low.
- the handler 225 can then restart working (bold arrow 365 ) on the copy of the control file 320 , which then replaces its original version.
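The replay-and-switch step above can be sketched as follows; `switch_to_updated_copy` and its parameters are hypothetical names, and `os.replace` merely stands in for whatever atomic promotion mechanism a real implementation would use:

```python
import os

def switch_to_updated_copy(original_path, copy_path, log_events, apply_update):
    # Replay each logged event against the copy (as already rewritten
    # by the planner), then promote the copy to be the new original.
    for event in log_events:
        apply_update(copy_path, event)
    # os.replace overwrites the original in a single step, so readers
    # never observe a half-switched control file.
    os.replace(copy_path, original_path)
```

The handler would call this while stopped, then resume working on the promoted file.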
- the handler 225 only updates information relating to the state of the jobs that have not been completed yet at the time the copy 320 of the control file is made.
- at the same time, the planner 215 only inserts new jobs or removes jobs that have already been completed.
- the handler 225 and the planner 215 act on subsets of the jobs (in the control file 220 ) that are disjoint. This ensures that the updates saved in the log 335 (while the planner is updating the copy of the control file 320 ) will always relate to jobs that are not affected by the operations performed by the planner 215 .
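The disjointness argument can be stated as a small predicate; the function name and parameters below are invented for illustration:

```python
# Why the replay is safe: the handler's logged updates touch only jobs
# that were still pending when the copy was made, while the planner only
# removes jobs already completed at that time or inserts brand-new ones.
# The two key sets are therefore disjoint, and the updates commute.
def is_replay_safe(logged_jobs, removed_jobs, inserted_jobs):
    touched_by_planner = set(removed_jobs) | set(inserted_jobs)
    return set(logged_jobs).isdisjoint(touched_by_planner)
```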
- each computer may have another structure or may include similar elements (such as cache memories temporarily storing the programs or parts thereof to reduce the accesses to the mass memory during execution); in any case, it is possible to replace the computer with any code execution entity (such as a PDA, a mobile phone, and the like).
- the invention has equal applicability to equivalent schedulers (for example, having another architecture, working on a single computer, or used to control the execution of other work units such as interactive tasks).
- the proposed solution may be used to update different memory structures (such as a table), or may be exploited by other modules of the scheduler performing different operations (such as the controller).
- the reference to the scheduler is not to be intended as a limitation.
- the solution of the invention can be used to update any shared resource by generic processes.
- the resource may be a database, a software application, or the configuration of a hardware device; likewise, the processes may be different software programs (even remote to each other), computers, subjects, and the like.
- the term update as used herein refers to any operation intended to modify the resource (for example, involving an addition, deletion, or change to any aspect relating to the resource or a part thereof).
- the concepts of the present invention are also applicable when the updates (applied by the two processes) are not commutative, but any difference resulting from their order of application to the shared resource is acceptable. For example, this is possible when the differences only relate to minor information that is not vital for the specific application, or when the first process (normally controlling the shared resource) overrides or disregards the differences introduced by the second process (during its massive update).
- the handler may log the updates only without actually changing the control file (for example, when every update is independent of the previous ones); in this case, these updates are applied only once after the planner completes the writing of the new plan.
- the solution of the invention lends itself to be applied even when the handler is not event-driven; in any case, it is always possible to log the actual updates that are applied to the control file by the handler (or any equivalent indication thereof).
- similar considerations apply if the program (which may be used to implement the invention) is structured in a different way, or if additional modules or functions are provided; likewise, the memory structures may be of other types, or may be replaced with equivalent entities (not necessarily consisting of physical storage media). Moreover, the proposed solution lends itself to be implemented with an equivalent method (for example, with similar or additional steps).
- the program may take any form suitable to be used by or in connection with any data processing system, such as external or resident software, firmware, or microcode (either in object code or in source code).
- the program may be provided on any computer-usable medium; the medium can be any element suitable to contain, store, communicate, propagate, or transfer the program.
- Examples of such medium are fixed disks (where the program can be pre-loaded), removable disks, tapes, cards, wires, fibers, wireless connections, networks, broadcast waves, and the like; for example, the medium may be of the electronic, magnetic, optical, electromagnetic, infrared, or semiconductor type.
- the solution according to the present invention lends itself to be carried out with a hardware structure (for example, integrated in a chip of semiconductor material), or with a combination of software and hardware.
Abstract
A solution is proposed in a scheduler for updating a control file (220) storing a workload plan (defining a sequence of jobs to be executed during a production day). A handler (225) has exclusive write access to the control file (for updating information relating to the state of the jobs during execution of the plan). A planner (215) that needs to write a new plan into the control file (for the next production day) notifies the handler. In response thereto, the handler makes a copy (320) of the control file for use by the planner and starts logging (335) any update that it performs on the (original) control file. Once the planner completes its work, the handler applies the logged updates to the copy of the control file (as updated by the planner). The handler can now restart working on the copy of the control file, which then replaces its original version.
Description
- The present invention relates to data processing systems. More specifically, the present invention relates to the updating of shared resources in a data processing system.
- Shared resources (such as files) are routinely used in modern data processing systems; a typical problem relating to every shared resource is the synchronisation of multiple processes that compete for access to it or interfere with one another.
- Particularly, a critical situation occurs when a first process working continuously on a shared resource has exclusive write access thereto; in this condition, a contention arises whenever a second process needs to update the same shared resource.
- The only available solution is to stop the first process so as to allow the second process to update the shared resource; once the operation completes, the first process can restart working on the shared resource (as updated by the second process). However, the first process is unable to work for the whole time required by the second process to update the shared resource.
- The problem is particularly acute when the first process performs a critical work and the second process requires a massive update of the shared resource. In this case, any stop of the first process should be limited as much as possible; therefore, the long time required to perform the massive update of the shared resource (by the second process) would be untenable. As a result, it is necessary to delay the update (for example, to a dead period of the first process). In any case, this drawback prevents the first process from working uninterruptedly.
- A typical example is that of a scheduler, which is used to control the execution of jobs in a data processing system. The jobs are executed according to a workload plan (defining their correct sequence). The plan is managed by a handler, which submits the jobs and maintains information about their current state up-to-date in the plan. For this purpose, the handler must have exclusive write access to a control file storing the plan. The plan is instead generated for a predefined production period by a planner. This operation requires the addition and the removal of thousands of jobs to/from the control file; therefore, the generation of a (new) plan is not possible while the (previous) plan is in execution.
- All of the above requires a dedicated time window before every production period for the writing of the corresponding plan into the control file. As a result, the scheduler is not able to work uninterruptedly 24 hours per day. This adversely affects the performance of the system; particularly, it is not possible to exploit the available hardware/software resources to their best.
- According to the present invention, the idea of logging the updates for later application is suggested.
- Particularly, an aspect of the invention proposes a method for updating a shared resource in a data processing system; this operation is performed by a first process and a second process in mutual exclusion. The method starts with the first process that accesses the shared resource for applying first updates. The second process notifies a request to the first process; the request is for applying one or more second updates to the shared resource. A copy of the shared resource is made in response to the request. The method continues by logging an indication of each first update. At the same time, the second updates are applied to a selected resource between the shared resource and its copy. In response to the completion of the application of the second updates, each logged first update is applied to the selected resource. The method ends by causing the first process to access the selected resource.
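As a rough illustration only (the patent describes the method abstractly, so every name below is an assumption), the claimed sequence of steps might be sketched in memory as:

```python
from copy import deepcopy

# Minimal in-memory sketch of the claimed method: a first process owns a
# shared resource, a second process requests a massive update, and the
# first process keeps working while logging its updates for later replay.
class SharedResource:
    def __init__(self, data):
        self.data = dict(data)

def concurrent_update(resource, first_updates, second_update):
    # 1. In response to the request, make a copy of the shared resource.
    copy = SharedResource(deepcopy(resource.data))
    log = []
    # 2. The first process keeps applying its updates to the original,
    #    logging an indication of each one.
    for key, value in first_updates:
        resource.data[key] = value
        log.append((key, value))
    # 3. Meanwhile the second process applies its (long) update to the copy.
    second_update(copy)
    # 4. Each logged first update is replayed onto the copy.
    for key, value in log:
        copy.data[key] = value
    # 5. The first process switches to the copy, which replaces the original.
    return copy
```

In the scheduler embodiment, the first updates are job-state changes, the second update is the writing of the new plan, and the two are commutative because they touch disjoint jobs.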
- In a preferred embodiment of the invention, the first and the second updates are commutative.
- Typically, the first updates are also applied to the unselected shared resource.
- As a further enhancement, the first updates are event-driven (so that only an indication of those events is logged).
- Preferably, the application of the second updates takes a time longer than the one of each first update.
- The proposed solution typically finds application in a scheduler (for updating a plan).
- In a specific embodiment of the invention, the processes are a handler and a planner of the scheduler.
- Advantageously, the handler updates execution information of each pending work unit and the planner inserts new work units or removes completed work units.
- A further aspect of the present invention proposes a computer program for performing the above-described method.
- Moreover, another aspect of the present invention proposes a corresponding system.
- The characterizing features of the present invention are set forth in the appended claims. The invention itself, however, as well as further features and the advantages thereof will be best understood by reference to the following detailed description, given purely by way of a nonrestrictive indication, to be read in conjunction with the accompanying drawings.
- FIG. 1a is a schematic block diagram of a data processing system in which the solution according to an embodiment of the invention is applicable;
- FIG. 1b shows the functional blocks of an exemplary scheduling server of the system;
- FIG. 2 depicts the main software components that can be used for implementing the solution according to an embodiment of the invention; and
- FIGS. 3a-3i show the sequence of operations executed during a process implementing the solution according to an embodiment of the invention.
- With reference in particular to FIG. 1a, a schematic block diagram of a data processing system 100 is illustrated. The system 100 has a distributed architecture based on a network 105 (typically consisting of the Internet). Particularly, a central scheduling server 110 is used to automate, monitor and control the execution of work units through the entire system 100. Typically, the work units consist of non-interactive jobs (for example, payroll programs, cost analysis applications, and the like), which are to be executed on multiple workstations 115. For this purpose, the scheduling server 110 and the workstations 115 communicate through the network 105.
- An exemplary implementation of the
scheduling server 110 is shown in FIG. 1b. Particularly, the scheduling server 110 is formed by several units that are connected in parallel to a system bus 153. In detail, one or more microprocessors (μP) 156 control operation of the scheduling server 110; a RAM 159 is directly used as a working memory by the microprocessors 156, and a ROM 162 stores basic code for a bootstrap of the scheduling server 110. Several peripheral units are clustered around a local bus 165 (by means of respective interfaces). Particularly, a mass memory consists of one or more hard-disks 168 and a drive 171 for reading CD-ROMs 174. Moreover, the scheduling server 110 includes input units 177 (for example, a keyboard and a mouse), and output units 180 (for example, a monitor and a printer). An adapter 183 is used to connect the scheduling server 110 to the network. A bridge unit 186 interfaces the system bus 153 with the local bus 165. Each microprocessor 156 and the bridge unit 186 can operate as master agents requesting an access to the system bus 153 for transmitting information. An arbiter 189 manages the granting of the access with mutual exclusion to the system bus 153.
- Moving now to
FIG. 2, a partial content of the working memory of the scheduling server 110 in operation is illustrated. The information (programs and data) is typically stored on the hard-disk and loaded (at least partially) into the working memory of the scheduling server 110 when the programs are running, together with an operating system and other application programs (not shown in the figure). The programs are initially installed onto the hard disk, for example, from CD-ROM.
- Particularly, the scheduling server 110 runs a corresponding application 200 (for example, the “IBM Tivoli Workload Scheduler” or TWS by IBM Corporation). The scheduler 200 includes a controller 205 (such as the “Composer” program of the above-mentioned TWS), which is used to manage a workload database 210.
- The
workload database 210 contains the definition of the whole scheduling environment. Particularly, the workload database 210 stores a representation of the topology of the system (i.e., the workstations with their connections) and of the hardware/software resources that are available for the execution of the jobs. The workload database 210 also includes a descriptor of each job (written in a suitable control language, for example, XML-based). The job descriptor specifies the programs to be invoked, their arguments and environment variables. Moreover, the job descriptor indicates any dependencies of the job (i.e., conditions that must be met before the job can start); exemplary dependencies are time constraints (such as a specific day, an earliest time or a latest time for its submission, or a maximum allowable duration), sequence constraints (such as the successful completion of other jobs), or enabling constraints (such as the entering of a response to a prompt by an operator). The job descriptor also specifies the (physical or logical) resources that are required by the job; those resources can be seen as a particular type of dependency, which conditions the execution of the job on their availability. Generally, the jobs are organized into streams; each job stream consists of an ordered sequence of jobs to be run as a single work unit respecting predefined dependencies (for example, jobs to be executed on the same day or using common resources). For the sake of simplicity, the term job will be used from now on to denote either a single job or a job stream (unless otherwise specified). The workload database 210 also stores statistics information relating to the execution of the jobs (such as a log of their duration, from which a corresponding estimated duration may be inferred).
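For illustration, a job descriptor covering the kinds of dependencies listed above might look like the following; the patent only says the control language is XML-based, so every field name here is invented:

```python
# Purely illustrative shape of a job descriptor; none of these field
# names comes from the patent or from the TWS product.
job_descriptor = {
    "name": "nightly_payroll",
    "program": "/opt/batch/payroll.sh",    # program to be invoked
    "arguments": ["--period", "daily"],
    "environment": {"REGION": "EMEA"},
    "dependencies": {
        "time": {"earliest": "22:00", "latest": "23:30"},  # time constraints
        "follows": ["extract_hours"],       # sequence constraint
        "prompt": "Confirm tape loaded?",   # enabling constraint
        "resources": ["tape_drive_1"],      # resource availability
    },
}
```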
- A planner 215 (such as the “Master Domain Manager” of the TWS) creates a workload plan, which consists of a batch of jobs (together with their dependencies) scheduled for execution on a specific production period (typically, one day); the plan is stored into a corresponding control file 220 (such as the “Symphony” of the TWS). A new plan is generally created automatically before every production day. For this purpose, the
planner 215 processes the information available in the workload database 210 so as to select the jobs to be run and to arrange them in the desired sequence (according to their specifications). Typically, the planner 215 updates the control file by adding the jobs so obtained (for the next production day) and by removing the preexisting jobs (of the previous production day) that have been completed; it should be noted that this operation involves a massive change to the control file 220 (with the addition and removal of thousands of jobs). In addition, the jobs of the previous production day that did not complete successfully or that are still running or waiting to be run can be maintained in the plan (for execution during the next production day). - A handler 225 (such as the “Batchman” process in the TWS) starts the plan at the beginning of every production day (for example, at 6:00 AM). After the plan has started, the
handler 225 takes exclusive write access to the control file 220. Whenever a new job of the plan must be submitted for execution, the handler 225 notifies an executor 230 (such as the “Jobman” process in the TWS). The executor 230 directly controls the launching and the tracking of the job (for example, interfacing with corresponding agents running on the different workstations). The executor 230 returns feedback information about the execution of the job to the handler 225 (for example, whether the job has been completed successfully, its actual duration, and the like); the handler 225 enters this information into the control file 220. In this way, the control file 220 is continuously updated so as to provide a real-time picture of the current state of all the jobs of the plan. At the end of the production day, the planner 215 accesses the control file 220 for updating the statistical information relating to the executed jobs in the workload database 210. - In the above-described scenario, it is necessary to have a dedicated time window before every production day for allowing the
planner 215 to write the corresponding new plan into the control file 220. Indeed, the handler 225 must have exclusive write access to the control file 220 during the execution of every plan; the operation of the handler 225 is very critical, so its stops must be limited as much as possible. On the other hand, as pointed out in the foregoing, the writing of any new plan into the control file 220 (by the planner 215) involves a massive update that typically takes quite a long time (on the order of some tens of minutes). Therefore, it is not possible to perform the operation while the previous plan is in execution. As a result, the scheduler 200 is not able to work uninterruptedly 24 hours per day. - A solution that allows writing a new plan into the
control file 220 by the planner 215 without substantially stopping operation of the handler 225 is illustrated in FIGS. 3a-3i. - Starting from
FIG. 3a, the handler 225 accesses the control file 220 for updating its content (bold arrow 305); in this phase, the control file 220 is owned exclusively by the handler 225 (any other process can, at most, read it). - Passing to
FIG. 3b, let us assume now that the planner 215 wishes to write a new plan into the control file 220. To this end, the planner 215 notifies the handler 225 accordingly (thin arrow 310). - The process continues to
FIG. 3c, wherein the handler 225 stops working in response to the request received from the planner 215. The handler 225 then makes a copy (bold arrow 315) of the control file 220 (denoted with 320). - As shown in
FIG. 3d, the handler 225 notifies the planner 215 as soon as the operation completes (thin arrow 325). It should be noted that the time required for copying the control file 220 is very short (on the order of a few seconds). - With reference now to
FIG. 3e, the handler 225 can restart working (bold arrow 305) on the control file 220; however, an indication of all the updates performed on the control file 220 is saved (bold arrow 330) into a corresponding log 335. Preferably, the log 335 only stores the events that cause the handler 225 to act on the control file 220 (i.e., the feedback information returned by the executor when a job completes either successfully or in error). At the same time, the planner 215 can start writing the new plan (bold arrow 340) into the copy of the control file 320. - The flow of activity continues to
FIG. 3f. As soon as the planner 215 completes the operation, it notifies the handler 225 accordingly (thin arrow 345). - In response thereto, as shown in
FIG. 3g, the handler 225 stops working again. The handler 225 then deletes the original control file 220 (cross 345). - Continuing to
FIG. 3h, the handler 225 reapplies all the updates made to the (original) control file to the corresponding copy 320. For this purpose, the handler 225 extracts (bold arrow 350) the events saved in the log 335 in succession; the update corresponding to each event is then applied (bold arrow 355) to the copy of the control file 320. In this case as well, the time required for the operation is very short (on the order of a few seconds), since the frequency of completion of the jobs (defining the corresponding logged events) is relatively low. - The process ends at
FIG. 3i, wherein the handler 225 deletes the log 335 (cross 360). The handler 225 can then restart working (bold arrow 365) on the copy of the control file 320, which then replaces its original version. - It is pointed out that the content of the control file at the end of the process is exactly the same as would have been obtained if the
handler 225 had been stopped for allowing the planner 215 to write the new plan. Indeed, the handler 225 only updates information relating to the state of the jobs that have not yet been completed (when the copy of the control file 320 is made). Conversely, the planner 215 only inserts new jobs or removes jobs that have already been completed at that time. In other words, the handler 225 and the planner 215 act on disjoint subsets of the jobs (in the control file 220). This ensures that the updates saved in the log 335 (while the planner is updating the copy of the control file 320) will always relate to jobs that are not affected by the operations performed by the planner 215. - More formally, let us denote with “c” the content of the control file. The updates applied by the
handler 225 are indicated by a function H( ), while the updates applied by the planner 215 are indicated by another function P( ). The above-mentioned condition is satisfied when the functions H( ) and P( ) are commutative, that is:
P( ) ∘ H( ) = H( ) ∘ P( ), or
H(P(c)) = P(H(c)). - The above-described solution only requires very short stops of the handler for copying the control file and for applying the logged updates (irrespective of the time for writing the new plan).
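The commutativity condition can be illustrated with a small sketch in which the control file is modeled as a mapping from jobs to states; the functions and job names below are hypothetical, chosen only to mirror the notation above:

```python
# Illustrative check that updates on disjoint job subsets commute.
# H() models the handler (state changes of pending jobs); P() models the
# planner (removing completed jobs, inserting next day's jobs).

def H(c):
    """Handler update: mark a pending job as done."""
    c = dict(c)          # work on a fresh copy
    c["jobA"] = "done"
    return c

def P(c):
    """Planner update: drop completed jobs, add a new job."""
    c = {job: s for job, s in c.items() if s != "completed"}
    c["jobC"] = "waiting"
    return c

c = {"jobA": "running", "jobB": "completed"}
assert H(P(c)) == P(H(c))   # both orders yield the same content
```

Because H( ) touches only pending jobs and P( ) touches only completed or new jobs, the two orders of application produce the same final content, as the equality above states.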
- In this way, the new plan can be written into the control file while the previous plan is in execution. As a result, it is now possible to have the scheduler work continuously 24 hours per day.
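Under the stated assumptions, the copy-log-replay sequence of FIGS. 3a-3i can be sketched as follows; the class, methods, and job names are hypothetical, and a dict stands in for the control file (the actual implementation works on files and processes):

```python
class Handler:
    """Minimal sketch of the copy/log/replay protocol of FIGS. 3a-3i.
    The control file is simulated with a dict mapping jobs to states."""

    def __init__(self, control_file):
        self.control = control_file      # original control file (FIG. 3a)
        self.copy = None                 # snapshot for the planner (FIG. 3c)
        self.log = []                    # events logged meanwhile (FIG. 3e)
        self.planner_writing = False

    def begin_plan_write(self):
        """Planner requested access: short stop to snapshot the file (FIGS. 3b-3d)."""
        self.copy = dict(self.control)
        self.log = []
        self.planner_writing = True
        return self.copy                 # the planner writes into the copy

    def record_event(self, job, state):
        """Feedback from the executor while the planner works (FIG. 3e)."""
        self.control[job] = state
        if self.planner_writing:
            self.log.append((job, state))

    def end_plan_write(self):
        """Planner finished: replay the log onto the copy and swap (FIGS. 3f-3i)."""
        for job, state in self.log:      # reapply each logged update
            self.copy[job] = state
        self.control = self.copy         # the copy replaces the original
        self.copy, self.log, self.planner_writing = None, [], False

# Example run mirroring the figures:
h = Handler({"jobA": "running", "jobB": "completed"})
new_plan = h.begin_plan_write()
del new_plan["jobB"]                     # planner removes a completed job
new_plan["jobC"] = "waiting"             # ...and inserts next day's job
h.record_event("jobA", "done")           # executor feedback during the write
h.end_plan_write()
```

After the run, the handler works on a control file containing the new plan plus the update logged while the planner was writing, with the handler stopped only for the (fast) snapshot and replay steps.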
- This has a beneficial effect on the performance of the system; particularly, it allows exploiting the available resources to the fullest (without any dead time).
- Naturally, in order to satisfy local and specific requirements, a person skilled in the art may apply to the solution described above many modifications and alterations. Particularly, although the present invention has been described with a certain degree of particularity with reference to preferred embodiment(s) thereof, it should be understood that various omissions, substitutions and changes in the form and details as well as other embodiments are possible; moreover, it is expressly intended that specific elements and/or method steps described in connection with any disclosed embodiment of the invention may be incorporated in any other embodiment as a general matter of design choice.
- For example, similar considerations apply if the system has a different architecture or includes equivalent units. Moreover, each computer may have another structure or may include similar elements (such as cache memories temporarily storing the programs or parts thereof to reduce the accesses to the mass memory during execution); in any case, it is possible to replace the computer with any code execution entity (such as a PDA, a mobile phone, and the like).
- In any case, the invention has equal applicability to equivalent schedulers (for example, having another architecture, working on a single computer, or used to control the execution of other work units such as interactive tasks). Alternatively, the proposed solution is used to update different memory structures (such as a table), or by other modules of the scheduler even performing different operations (such as the controller).
- However, the reference to the scheduler is not to be intended as a limitation. Indeed, the solution of the invention can be used to update any shared resource by generic processes. For example, the resource may be a database, a software application, or the configuration of a hardware device; likewise, the processes may be different software programs (even remote to each other), computers, subjects, and the like. In addition, the term update as used herein refers to any operation intended to modify the resource (for example, involving an addition, deletion, or change to any aspect relating to the resource or a part thereof).
- It should be readily apparent that the same result may be achieved if the handler works on the copy of the original control file while the planner updates its original version. Moreover, nothing prevents having the planner perform the copy of the control file (instead of the handler). The order of some operations may also be changed without affecting the result of the process (such as by deleting both the original control file and the log at the end); likewise, it is possible to retain this information for a predefined period (for example, for security reasons).
- The concepts of the present invention are also applicable when the updates (applied by the two processes) are not commutative, but any difference resulting from their order of application to the shared resource is acceptable. For example, this is possible when the differences only relate to minor information that is not vital for the specific application, or when the first process (normally controlling the shared resource) overrides or disregards the differences introduced by the second process (during its massive update).
- In another embodiment of the invention, the handler may log the updates only without actually changing the control file (for example, when every update is independent of the previous ones); in this case, these updates are applied only once after the planner completes the writing of the new plan.
- The solution of the invention lends itself to be applied even when the handler is not event-driven; in any case, it is always possible to log the actual updates that are applied to the control file by the handler (or any equivalent indication thereof).
- Even if the proposed solution has been specifically devised for performing massive updates to the shared resource, this is not to be interpreted in a limitative manner. More generally, the technical idea of the invention may be applied to any kind of processes (even when they involve different numbers of updates of any length).
- Similar considerations apply if the program (which may be used to implement the invention) is structured in a different way, or if additional modules or functions are provided; likewise, the memory structures may be of other types, or may be replaced with equivalent entities (not necessarily consisting of physical storage media). Moreover, the proposed solution lends itself to be implemented with an equivalent method (for example, with similar or additional steps). In any case, the program may take any form suitable to be used by or in connection with any data processing system, such as external or resident software, firmware, or microcode (either in object code or in source code). Moreover, the program may be provided on any computer-usable medium; the medium can be any element suitable to contain, store, communicate, propagate, or transfer the program. Examples of such medium are fixed disks (where the program can be pre-loaded), removable disks, tapes, cards, wires, fibers, wireless connections, networks, broadcast waves, and the like; for example, the medium may be of the electronic, magnetic, optical, electromagnetic, infrared, or semiconductor type.
- In any case, the solution according to the present invention lends itself to be carried out with a hardware structure (for example, integrated in a chip of semiconductor material), or with a combination of software and hardware.
Claims (15)
1. A method for updating a shared resource in a data processing system by a first process and a second process in mutual exclusion, the method including the steps of:
the first process accessing the shared resource for applying first updates,
the second process notifying a request to the first process for applying at least one second update to the shared resource,
making a copy of the shared resource in response to the request,
logging an indication of each first update,
applying the at least one second update to a selected resource between the shared resource and the copy of the shared resource,
applying each logged first update to the selected resource in response to the completion of the application of the at least one second update, and
causing the first process to access the selected resource.
2. The method according to claim 1 , wherein the first updates and the at least one second update are commutative.
3. The method according to claim 1 , further including the step of:
applying each first update to the unselected resource.
4. The method according to claim 1 , wherein each first update is driven by an event being notified to the first process, the step of logging the indication of each first update including:
storing an indication of the corresponding event.
5. The method according to claim 1 , wherein the at least one second update consists of a plurality of second updates, an application time of the second updates being substantially higher than the application time of each first update.
6. The method according to claim 1 , wherein the shared resource consists of a memory structure storing a plan of a scheduling application, the plan including an indication of a plurality of work units to be executed on the data processing system.
7. The method according to claim 6 , wherein the first process is a handler for storing an indication of the execution state of each work unit in the memory structure and the second process is a planner for writing the plan for a predefined period in the memory structure.
8. The method according to claim 7 , wherein the handler is adapted to update the indication of the execution state of each pending work unit and wherein the planner is adapted to insert new work units or to remove completed work units.
9. (canceled)
10. (canceled)
11. A computer program product comprising a computer-usable medium embodying a computer program, the computer program when executed on a data processing system causing the system to perform a method for updating a shared resource by a first process and a second process in mutual exclusion, the method including the steps of:
the first process accessing the shared resource for applying first updates,
the second process notifying a request to the first process for applying at least one second update to the shared resource,
making a copy of the shared resource in response to the request, logging an indication of each first update,
applying the at least one second update to a selected resource between the shared resource and the copy of the shared resource,
applying each logged first update to the selected resource in response to the completion of the application of the at least one second update, and
causing the first process to access the selected resource.
12. A data processing system including a shared resource, a first software module for implementing a first process and a second software module for implementing a second process for updating the shared resource in mutual exclusion, wherein the system includes:
means for causing the first process to access the shared resource for applying first updates,
means for causing the second process to notify a request to the first process for applying at least one second update to the shared resource,
means for making a copy of the shared resource in response to the request,
means for logging an indication of each first update,
means for applying the at least one second update to a selected resource between the shared resource and the copy of the shared resource,
means for applying each logged first update to the selected resource in response to the completion of the application of the at least one second update, and
means for causing the first process to access the selected resource.
13. The system according to claim 12 , wherein the shared resource consists of a memory structure storing a plan of a scheduling application, the plan including an indication of a plurality of work units to be executed on the data processing system.
14. The system according to claim 13 , wherein the first process is a handler for storing an indication of the execution state of each work unit in the memory structure and the second process is a planner for writing the plan for a predefined period in the memory structure.
15. The system according to claim 14 , wherein the handler is adapted to update the indication of the execution state of each pending work unit and wherein the planner is adapted to insert new work units or to remove completed work units.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP05105418 | 2005-06-20 | ||
EP05105418.7 | 2005-06-20 |
Publications (1)
Publication Number | Publication Date |
---|---|
US20060288049A1 true US20060288049A1 (en) | 2006-12-21 |
Family
ID=37574635
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/425,230 Abandoned US20060288049A1 (en) | 2005-06-20 | 2006-06-20 | Method, System and computer Program for Concurrent File Update |
Country Status (1)
Country | Link |
---|---|
US (1) | US20060288049A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090300633A1 (en) * | 2008-05-28 | 2009-12-03 | International Business Machines Corporation | Method and System for Scheduling and Controlling Backups in a Computer System |
US20100153183A1 (en) * | 1996-09-20 | 2010-06-17 | Strategyn, Inc. | Product design |
US20140164292A1 (en) * | 2008-05-30 | 2014-06-12 | Strategyn Holdings, Llc | Commercial investment analysis |
US9135633B2 (en) | 2009-05-18 | 2015-09-15 | Strategyn Holdings, Llc | Needs-based mapping and processing engine |
US20190086218A1 (en) * | 2017-09-15 | 2019-03-21 | Accenture Global Solutions Limited | Navigation system for a multi-dimensional parameter space |
Citations (35)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5434994A (en) * | 1994-05-23 | 1995-07-18 | International Business Machines Corporation | System and method for maintaining replicated data coherency in a data processing system |
US5987477A (en) * | 1997-07-11 | 1999-11-16 | International Business Machines Corporation | Parallel file system and method for parallel write sharing |
US20020099843A1 (en) * | 2001-01-24 | 2002-07-25 | International Business Machines Corporation | Method, system, and program for managing client access to a shared resource |
US20020188597A1 (en) * | 2000-09-01 | 2002-12-12 | Jonathan Kern | Methods and systems for linking tasks to workflow |
US6526416B1 (en) * | 1998-06-30 | 2003-02-25 | Microsoft Corporation | Compensating resource managers |
US20030074394A1 (en) * | 2001-10-16 | 2003-04-17 | Kave Eshghi | Effectively and efficiently updating content files among duplicate content servers |
US20030074355A1 (en) * | 2001-03-23 | 2003-04-17 | Restaurant Services, Inc. ("RSI"). | System, method and computer program product for a secure supply chain management framework |
US20030088570A1 (en) * | 2001-11-05 | 2003-05-08 | Fuji Xerox Co., Ltd. | Systems and methods for operating a multi-user document device via a personal device portal |
US20030149709A1 (en) * | 2002-02-05 | 2003-08-07 | International Business Machines Corporation | Consolidation of replicated data |
US20030158862A1 (en) * | 2002-02-15 | 2003-08-21 | International Business Machines Corporation | Standby file system with snapshot feature |
US20030182312A1 (en) * | 2002-03-19 | 2003-09-25 | Chen Raymond C. | System and method for redirecting access to a remote mirrored snapshop |
US20030188160A1 (en) * | 2001-08-02 | 2003-10-02 | Singam Sunder | Method and system to securely update files via a network |
US20040039723A1 (en) * | 2001-02-08 | 2004-02-26 | In Lee | Systems and methods for distributing multimedia presentations over a network |
US20040062130A1 (en) * | 2002-09-30 | 2004-04-01 | Chiang Ying-Hsin Robert | Updating electronic files using byte-level file differencing and updating algorithms |
US20040243633A1 (en) * | 2003-03-18 | 2004-12-02 | British Telecommunications Public Limited Company | Access control to shared resources |
US20050010607A1 (en) * | 2003-07-10 | 2005-01-13 | Parker James A. | Collaborative file update system |
US20050022047A1 (en) * | 2003-07-21 | 2005-01-27 | Oracle International Corporation | Conditional data access after database system failure |
US20050033600A1 (en) * | 2000-06-22 | 2005-02-10 | Geddes Norman D. | Method and system for dynamic business process management using a partial order planner |
US6859812B1 (en) * | 2001-10-31 | 2005-02-22 | Hewlett-Packard Development Company, L.P. | System and method for differentiating private and shared files within a computer cluster |
US20050055402A1 (en) * | 2003-09-09 | 2005-03-10 | Eiichi Sato | File sharing device and inter-file sharing device data migration method |
US20050066095A1 (en) * | 2003-09-23 | 2005-03-24 | Sachin Mullick | Multi-threaded write interface and methods for increasing the single file read and write throughput of a file server |
US20050102399A1 (en) * | 2003-11-10 | 2005-05-12 | Hitachi, Ltd. | Exclusive control system and method for using shared resource |
US20050108298A1 (en) * | 2003-11-17 | 2005-05-19 | Iyengar Arun K. | System and method for achieving different levels of data consistency |
US20050182774A1 (en) * | 2002-05-17 | 2005-08-18 | Weir John M. | Transaction management system |
US20050192963A1 (en) * | 2002-02-26 | 2005-09-01 | Tschiegg Mark A. | Risk management information interface system and associated methods |
US20050240613A1 (en) * | 2004-04-21 | 2005-10-27 | Logan Carmen Jr | Portable health care history information system |
US6973549B1 (en) * | 2001-12-10 | 2005-12-06 | Incipient, Inc. | Locking technique for control and synchronization |
US20050278374A1 (en) * | 2004-05-21 | 2005-12-15 | Bea Systems, Inc. | Dynamic program modification |
US20050283780A1 (en) * | 2004-06-16 | 2005-12-22 | Karp Alan H | Synchronization of threads in a multithreaded computer program |
US20060026165A1 (en) * | 2004-07-19 | 2006-02-02 | Microsoft Corporation | Remote file updates through remote protocol |
US20060112121A1 (en) * | 2004-11-23 | 2006-05-25 | Mckenney Paul E | Atomically moving list elements between lists using read-copy update |
US20060136516A1 (en) * | 2004-12-16 | 2006-06-22 | Namit Jain | Techniques for maintaining consistency for different requestors of files in a database management system |
US7143410B1 (en) * | 2000-03-31 | 2006-11-28 | Intel Corporation | Synchronization mechanism and method for synchronizing multiple threads with a single thread |
US7240091B1 (en) * | 1999-10-04 | 2007-07-03 | Microsoft Corporation | Method and system for supporting off-line mode of operation and synchronization |
US7330862B1 (en) * | 2003-04-25 | 2008-02-12 | Network Appliance, Inc. | Zero copy write datapath |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:BENEDETTI, FABIO;IANNUCCI, PIETRO;MOSSOTO, FRANCO;REEL/FRAME:017820/0819 Effective date: 20060612 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |