mdcore
Compiling mdcore

Download and unpack the sources. This should create a directory named "mdcore". Change to this directory and execute the configuration script as follows:

    cd mdcore
    ./configure

The "configure" script will try to guess the best compiler options and the location of the required libraries. To see the available options, type

    ./configure --help

Among the most interesting options are "--enable-mpi" and "--with-cell", which are required for some of the example programs described below.
Once the configuration process has completed successfully, compile the library by typing

    make

This will generate the mdcore libraries in the sub-folder "src/.libs".

Some notes on OpenMP on Linux

mdcore relies on both pthreads and OpenMP for parallelization, using the former to parallelize the force computation and the latter for simpler tasks such as updating the particle positions or computing the system temperature. Depending on your OpenMP implementation, environment variables such as OMP_NUM_THREADS, which sets the number of OpenMP threads, and OMP_WAIT_POLICY, which controls whether idle threads spin or sleep, may be useful.
For convenience, these variables can be set on the command line:

    OMP_WAIT_POLICY=PASSIVE OMP_NUM_THREADS=8 ./simulation ...

Using mdcore

Writing programs using mdcore is relatively simple and consists of four basic steps: initializing an engine, adding particle types and particles, adding interaction potentials and constraints, and starting the engine and iterating over the time steps.
Any program that uses mdcore needs to include the mdcore header file:

    #include <mdcore.h>

The program must also be linked against one of the library objects generated when compiling mdcore, e.g. the double-precision or single-precision variant in "src/.libs".
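As a minimal sketch of the build step, assuming the generated library object is named "libmdcore" and the headers live in "mdcore/src" (both assumptions, check your build tree), a program might be compiled and linked as follows:

    gcc -o simulation simulation.c -Imdcore/src -Lmdcore/src/.libs -lmdcore -fopenmp -lpthread -lm

Both the pthreads and OpenMP flags are needed since mdcore uses both for parallelization.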
The main object with which programs interact with mdcore is the "engine":

    struct engine e;

which is initialized with the function "engine_init", e.g.

    if ( engine_init( &e , origin , dim , L , cutoff , space_periodic_full , max_types , engine_flags ) != 0 ) {
        errs_dump(stdout);
        abort();
        }

where origin and dim describe the origin and extent of the simulation domain, L contains the preferred cell edge lengths for the spatial decomposition, cutoff is the non-bonded interaction cutoff, space_periodic_full selects periodic boundary conditions in all three dimensions, max_types is the maximum number of particle types, and engine_flags controls optional engine behaviour.
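For illustration, a hypothetical setup of a periodic 10x10x10 nm domain with 1 nm cells and a 1 nm cutoff could look as follows; all numeric values are placeholders, and 0 is passed as the engine flags on the assumption that no special behaviour is needed:

    double origin[3] = { 0.0 , 0.0 , 0.0 };   /* lower corner of the domain */
    double dim[3] = { 10.0 , 10.0 , 10.0 };   /* domain extent in nm */
    double L[3] = { 1.0 , 1.0 , 1.0 };        /* cell edge lengths in nm */

    if ( engine_init( &e , origin , dim , L , 1.0 , space_periodic_full , 10 , 0 ) != 0 ) {
        errs_dump(stdout);
        abort();
        }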
Once the engine has been initialized, particles can be added to it, either individually or all at once. Each particle has a particle type associated with it. Particle types can be specified with the function "engine_addtype", e.g.:

    if ( ( tid = engine_addtype( &e , mass , charge , name , name2 ) ) < 0 ) {
        errs_dump(stdout);
        abort();
        }

where mass and charge are the mass and charge of the particle type, name and name2 are the primary and alternate names of the type, and the returned value tid is the ID of the newly created type.
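As a sketch, and matching the neon-neon potential used further below, a neutral neon type could be registered as follows (the mass, in atomic mass units, and the type names are illustrative):

    int tid_Ne;

    /* Neutral neon: mass 20.18 amu, charge 0. */
    if ( ( tid_Ne = engine_addtype( &e , 20.18 , 0.0 , "Ne" , "Ne" ) ) < 0 ) {
        errs_dump(stdout);
        abort();
        }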
Individual particles can be added with the function "space_addpart", e.g.:

    if ( space_addpart( &(e.s) , &p , x ) != 0 ) {
        errs_dump(stdout);
        abort();
        }

where &(e.s) is a pointer to the engine's underlying space, p is a struct part whose type has been set to a valid type ID, and x is the position of the particle.
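A hypothetical call might then look like the following, assuming struct part exposes the id and type fields used here (the struct is zeroed first so all other fields start out cleared):

    struct part p;
    double x[3] = { 5.0 , 5.0 , 5.0 };       /* position in nm */

    memset( &p , 0 , sizeof(struct part) );  /* zero all fields */
    p.id = 0;                                /* unique particle ID */
    p.type = tid_Ne;                         /* type ID from engine_addtype */

    if ( space_addpart( &(e.s) , &p , x ) != 0 ) {
        errs_dump(stdout);
        abort();
        }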
Alternatively, several particles can be added at once with the function "engine_load", e.g.

    if ( engine_load( &e , x , v , type , pid , vid , q , flags , N ) != 0 ) {
        errs_dump(stdout);
        abort();
        }

where x, v, type, pid, vid, q, and flags are arrays containing, for each of the N particles, its position, velocity, type ID, particle ID, virtual ID, charge, and particle flags, respectively.
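For example, a sketch that loads N identical particles in one call might look as follows; the actual position and velocity values are elided, and passing 0 for the per-particle flags is an assumption meaning "no flags":

    #define N 1000

    double x[3*N] , v[3*N] , q[N];
    int type[N] , pid[N] , vid[N] , k;
    unsigned int flags[N];

    for ( k = 0 ; k < N ; k++ ) {
        /* ... fill x[3*k..3*k+2] and v[3*k..3*k+2] with the particle's
           position (nm) and velocity (nm/ps) ... */
        type[k] = tid_Ne;   /* type ID from engine_addtype */
        pid[k] = k;         /* unique particle ID */
        vid[k] = 0;
        q[k] = 0.0;         /* charge */
        flags[k] = 0;       /* assumed: no particle flags */
        }

    if ( engine_load( &e , x , v , type , pid , vid , q , flags , N ) != 0 ) {
        errs_dump(stdout);
        abort();
        }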
Note that the particle positions and velocities are specified in nanometers and nanometers per picosecond, respectively. Once the particle types have been specified, interaction potentials can be created and associated with pairs of types. The interaction potentials themselves are stored as least-squares piecewise polynomials, which can be constructed from one of the pre-defined potential function types, e.g.:
    struct potential *pot_NeNe;

    if ( ( pot_NeNe = potential_create_LJ126( 0.2 , cutoff , 2.6513e-06 , 5.7190e-03 , 1.0e-3 ) ) == NULL ) {
        errs_dump(stdout);
        abort();
        }

Non-bonded interaction potentials between particles are added to the engine using the "engine_addpot" function, e.g.

    if ( engine_addpot( &e , pot , tid1 , tid2 ) < 0 ) {
        errs_dump(stdout);
        abort();
        }

where tid1 and tid2 are the type IDs of the interacting particles as returned by "engine_addtype", as described earlier. Similarly, bonded interactions are added with the function "engine_bond_addpot", e.g.

    if ( engine_bond_addpot( &e , pot , tid1 , tid2 ) < 0 ) {
        errs_dump(stdout);
        abort();
        }

Which particles themselves are bonded is then specified with

    if ( engine_bond_add( &e , pid1 , pid2 ) < 0 ) {
        errs_dump(stdout);
        abort();
        }

where pid1 and pid2 are particle IDs as specified when adding particles to the system. Similarly, angular and dihedral potentials are added with engine_angle_addpot and engine_dihedral_addpot, and the angles and dihedrals themselves with engine_angle_add and engine_dihedral_add, respectively.

If the non-bonded interaction between two bonded particles is to be excluded, this has to be specified explicitly via the "engine_exclusion_add" function, e.g.

    if ( engine_exclusion_add( &e , pid1 , pid2 ) < 0 ) {
        errs_dump(stdout);
        abort();
        }

If any such exclusions have been added to the engine, redundancies between them can be removed by calling "engine_exclusion_shrink".

Finally, holonomic constraints are added using the "engine_rigid_add" function, e.g.

    if ( engine_rigid_add( &e , pid1 , pid2 , d ) < 0 ) {
        errs_dump(stdout);
        abort();
        }

where d is the prescribed distance between the particles pid1 and pid2.

After all the particles, potentials, bonded interactions, and holonomic constraints have been added to the engine, it has to be started with "engine_start", e.g.

    if ( engine_start( &e , nr_runners , nr_queues ) < 0 ) {
        errs_dump(stdout);
        abort();
        }

where nr_runners and nr_queues are the number of threads and task queues to use, respectively. As of this point, the engine is ready to integrate the equations of motion for the particles.

Once the engine has been started, the computations for each time step, i.e. computing the bonded and non-bonded interactions, resolving the holonomic constraints, and updating the particle positions and velocities, are performed with the function "engine_step", e.g.:

    e.time = 0;
    e.dt = 0.005;
    for ( i = 0 ; i < nrsteps ; i++ )
        if ( engine_step( &e ) != 0 ) {
            errs_dump(stdout);
            abort();
            }

where the time step e.dt is specified in picoseconds. Between the time steps, the particle data can be unloaded and re-loaded with the functions engine_unload and engine_load, as described earlier, e.g. to adjust the velocities or plot output.
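Putting the pieces together, the following is a minimal sketch of a complete driver program. It uses only the calls shown above; the domain size, type parameters, thread counts, and step count are illustrative placeholders, and 0 is again assumed as a "no flags" value for engine_init:

    #include <stdio.h>
    #include <stdlib.h>
    #include <mdcore.h>

    int main ( int argc , char *argv[] ) {

        struct engine e;
        struct potential *pot;
        double origin[3] = { 0.0 , 0.0 , 0.0 };
        double dim[3] = { 10.0 , 10.0 , 10.0 };
        double L[3] = { 1.0 , 1.0 , 1.0 };
        double cutoff = 1.0;
        int tid , i;

        /* Initialize the engine on a periodic 10x10x10 nm domain. */
        if ( engine_init( &e , origin , dim , L , cutoff , space_periodic_full , 10 , 0 ) != 0 ) {
            errs_dump(stdout);
            abort();
            }

        /* Register a single (illustrative) particle type. */
        if ( ( tid = engine_addtype( &e , 20.18 , 0.0 , "Ne" , "Ne" ) ) < 0 ) {
            errs_dump(stdout);
            abort();
            }

        /* ... add particles with space_addpart or engine_load ... */

        /* Create a Lennard-Jones potential and attach it to the type pair. */
        if ( ( pot = potential_create_LJ126( 0.2 , cutoff , 2.6513e-06 , 5.7190e-03 , 1.0e-3 ) ) == NULL ) {
            errs_dump(stdout);
            abort();
            }
        if ( engine_addpot( &e , pot , tid , tid ) < 0 ) {
            errs_dump(stdout);
            abort();
            }

        /* Start the engine with four runner threads and four task queues. */
        if ( engine_start( &e , 4 , 4 ) < 0 ) {
            errs_dump(stdout);
            abort();
            }

        /* Integrate the equations of motion. */
        e.time = 0;
        e.dt = 0.005;
        for ( i = 0 ; i < 1000 ; i++ )
            if ( engine_step( &e ) != 0 ) {
                errs_dump(stdout);
                abort();
                }

        return 0;
        }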
Example programs

The directory "mdcore/examples" contains a number of small simulations which offer a useful starting point for experimenting with mdcore and implementing your own simulation.

jac: A simulation of the "Joint AMBER-CHARMM" (JAC) benchmark, i.e. the protein DHFR in a cubic box of water, executed as follows:

    ./jac 5dhfr_cube.psf 5dhfr_cube.pdb n steps

where n is the number of processors to use and steps is the number of time steps to simulate. The files 5dhfr_cube.psf and 5dhfr_cube.pdb contain the simulation structure. The interaction parameters are read from the file par_all22_prot.inp, which is in the same directory. The source file jac.c is set up to deal with all simulation options, e.g. Verlet lists, pairwise Verlet lists, MPI parallelization, and GPU parallelization. The different options can be set by changing the engine flags in the call to "engine_init" at the start of the program.

argon: A simple simulation consisting of 10x10x10 cells of bulk argon at 100 K (the number of particles is adjusted depending on the cell edge length). The simulation is executed with the following parameters:

    ./argon n steps dt L

where n is the number of processors to use, steps is the number of time steps of length dt picoseconds to take, and L is the cell edge length to use. Two other executables, "argon_verlet" and "argon_pwverlet", are provided, which use Verlet and pairwise Verlet lists respectively. The "skin" width used in the lists is the supplied edge length L minus a fixed cutoff of 1.0 nm. Velocity scaling with a coupling constant of 0.1 is used during the first 10000 steps to maintain a temperature of 100 K.

bulk: A bulk water simulation consisting of 8x8x8 cells of rigid SPC/E water molecules at 300 K (the number of molecules is adjusted depending on the cell edge length). The water molecules are kept rigid using the SHAKE algorithm to half the digits of machine precision. The simulation is executed with the same set of parameters as "argon" above. The executable "test" is linked with the double-precision version of mdcore, whereas "test_single" is linked with the single-precision version. If mdcore was configured with the "--with-cell" option, then "test_cell" executes the simulation using as many SPUs as are available.

hybrid: Similar to the "bulk" simulation above, yet with 16x16x16 cells of edge length 1.0 nm. The "hybrid" simulation requires mdcore to have been compiled with the "--enable-mpi" option and is executed as follows:

    mpirun -x OMP_WAIT_POLICY -x OMP_NUM_THREADS -np m ./hybrid n steps

where m is the number of MPI nodes to use and n is the number of threads to use on each node. Since bisection is used to split the domain, it is recommended, for proper load balancing, that m be a power of 2.