
6.3 host.list file

The host file should contain a list of all the nodes (or pools of nodes, but not both) you wish to run your code on. The first task will run on the first node or pool listed, the second task on the second node or pool listed, and so on. If you are using pools in the host file and do not list enough pools for all the tasks, the remaining tasks will use additional nodes within the last pool listed. If you are listing individual nodes, however, you must list at least as many nodes in the host file as tasks you wish to run. You are allowed to repeat a node name within a host file; doing so will cause your program to run multiple tasks on that node, as in the sketch below.
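
For example, a host file that repeats a node name places more than one task on the repeated node. This is a minimal sketch, reusing the node names from the sample host file below:

!Two tasks will run on r25n09, one task on r25n10
r25n09.tc.cornell.edu shared multiple
r25n09.tc.cornell.edu shared multiple
r25n10.tc.cornell.edu shared multiple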

The default host file is host.list, but you can set the MP_HOSTFILE environment variable to some other file name. If you are running your code on ONE pool, you do not need a host file at all: set MP_HOSTFILE to NULL and RM_POOL to the appropriate pool number.
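
For example, assuming a Korn-shell environment (C-shell users would use setenv instead), the two cases might look like this; the file name my.hosts and pool number 1 are only placeholders:

# use a host file named my.hosts instead of host.list
export MP_HOSTFILE=my.hosts

# or run on a single pool (pool 1 here) with no host file
export MP_HOSTFILE=NULL
export RM_POOL=1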

A sample host file using nodes:

!This is a comment line 
!Use an exclamation point at the beginning of any comment
r25n09.tc.cornell.edu shared multiple
r25n10.tc.cornell.edu shared multiple
r25n11.tc.cornell.edu shared multiple
r25n12.tc.cornell.edu shared multiple
!
!Host nodes are named r25n09, r25n10, r25n11, and r25n12
!When using MPL, shared means you share the node with others.
!multiple means you allow multiple MPL tasks on one node.
!
!dedicated in place of shared or multiple means you do not want 
!to share with other people's MPL tasks, or you do not want 
!to allow multiple MPL tasks of your own on one node.

A sample host file using pools:

!This line is a comment
@0 shared multiple
@1 shared multiple
@3 shared multiple
@0 shared multiple
!0, 1, and 3 are the pool numbers.  
!Again, shared means you share with others.
!multiple means you allow multiple tasks on one node.
In this example, one node from pool 0 is chosen by the Resource daemon for the first task, one node from pool 1 for the second task, one node from pool 3 for the third task, and one node from pool 0 for the fourth task. Because pool 0 is the last pool listed, the nodes for any remaining task(s) are also chosen from pool 0.
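
For example, with the host file above and six tasks, the assignment would be:

first task  -> a node in pool 0
second task -> a node in pool 1
third task  -> a node in pool 3
fourth task -> a node in pool 0
fifth task  -> a node in pool 0   (last pool listed)
sixth task  -> a node in pool 0   (last pool listed)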