remove LAPI, PVM. Resolves #111 (#331)
* remove LAPI, PVM. Resolves #111

* armci portals remnants
ajaypanyala authored Jul 26, 2024
1 parent 4a4f33f commit 899431a
Showing 65 changed files with 91 additions and 4,456 deletions.
8 changes: 0 additions & 8 deletions armci/Makefile.am
@@ -160,14 +160,6 @@ if ARMCI_NETWORK_CRAY_SHMEM
AM_CPPFLAGS += -I$(top_srcdir)/src/devices/cray-shmem
libarmci_la_SOURCES += src/memory/shmalloc.c
endif
if ARMCI_NETWORK_LAPI
AM_CPPFLAGS += -I$(top_srcdir)/src/devices/lapi
libarmci_la_SOURCES += src/common/async.c
libarmci_la_SOURCES += src/common/request.c
libarmci_la_SOURCES += src/devices/lapi/lapi.c
libarmci_la_SOURCES += src/devices/lapi/lapidefs.h
libarmci_la_SOURCES += src/memory/buffers.c
endif
if ARMCI_NETWORK_MPI_MT
AM_CPPFLAGS += -I$(top_srcdir)/src/devices/mpi-mt
libarmci_la_SOURCES += src/common/ds-shared.c
69 changes: 2 additions & 67 deletions armci/README
@@ -58,19 +58,15 @@ Index
-----
1. Supported Platforms
2. General Settings
3. Building ARMCI on SGI.
4. Building ARMCI on IBM.
5. Building ARMCI on CRAY.
6. Building ARMCI on other platforms
7. Platform specific issues/tuning

Supported Platforms
-------------------
- leadership class machines: Cray XE6, Cray XTs, IBM Blue Gene/L, IBM Blue
Gene/P
- shared-memory systems: SUN Solaris, SGI, SGI Altix, IBM, Linux, DEC, HP,
Cray SV1, Cray X1, and Windows NT/95/2000
- distributed-memory systems: Cray T3E, IBM SP(TARGET=LAPI), FUJITSU VX/VPP.
- leadership class machines: Cray XT/XE/XK/XC, IBM Blue Gene/Q.
- shared-memory systems: SUN Solaris, SGI Altix, IBM, Linux
- clusters of workstations (InfiniBand, sockets)

configure options
@@ -135,7 +131,6 @@ specified.
--with-cray-shmem=ARG select armci network as Cray XT shmem
--with-dcmf=ARG select armci network as IBM BG/P Deep Computing
Message Framework
--with-lapi=ARG select armci network as IBM LAPI
--with-mpi-spawn=ARG select armci network as MPI-2 dynamic process mgmt
--with-openib=ARG select armci network as InfiniBand OpenIB
--with-sockets=ARG select armci network as Ethernet TCP/IP (default)
@@ -236,66 +231,6 @@ All tests have a per-test log file containing the output of the test. So if
the test is testing/test.x, the log file would be testing/test.log. The output
of failed tests is collected in the top-level log summary test-suite.log.

ANCIENT WISDOM
==============

Building on SGI
---------------

For SGI machines running the IRIX OS, three TARGET settings are
available:

- TARGET=SGI generates MIPS-4 64-bit code with a 32-bit address space when
  compiling on R8000-based machines, and 32-bit MIPS-2 code on non-R8000
  machines.
- TARGET=SGI64 generates 64-bit code with a 64-bit address space.
- TARGET=SGI_N32 generates 32-bit code with a 32-bit address space.

By default, SGI_N32 generates MIPS-3 code and SGI64 generates MIPS-4 code.

There is a possibility of conflict between SGI's implementation of MPI
(though not others, such as MPICH) and ARMCI in their use of the
SGI-specific inter-processor communication facility called arena.

Building on IBM
---------------

Running on IBM without LAPI
+++++++++++++++++++++++++++

On IBM systems running AIX, TARGET can be set to IBM or IBM64 to build
32-bit or 64-bit versions of the code.

Running on the IBM-SP
+++++++++++++++++++++

On the IBM-SP, TARGET can be set to LAPI (LAPI64 for 64-bit objects). POE
environment variable settings for the parallel environment PSSP 3.1:

- ARMCI applications, like any other LAPI-based codes, must set
  MP_MSG_API=lapi, or MP_MSG_API=mpi,lapi when using ARMCI together with MPI.
- The LAPI-based implementation of ARMCI cannot be used on the very old SP-2
  systems because LAPI did not support the TB2 switch used in those models.
  If in doubt about which switch you have, use the odmget command:
  odmget -q name=css0 CuDv
- For AIX versions 4.3.1 and later, the environment variable AIXTHREAD_SCOPE=S
  must be set to assure correct operation of LAPI (IBM should set it in PSSP
  by default).
- Under AIX 4.3.3 and later, an additional environment variable (RT_GRQ=ON) is
  required to restore the original thread scheduling that LAPI relies on.

Building on CRAY
----------------

- The TARGET environment variable is also used by cc on CRAY. It must be set
  to CRAY-SV1 on the SV1, CRAY-YMP on the YMP, and CRAY-T3E on the T3E; ARMCI
  on CRAYs therefore uses the same values for this variable as cc requires.

- On the CRAY-T3E, ARMCI can be run with either of the CRAY message-passing
  libraries (PVM and MPI). For more information on running with PVM, see
  docs/README.PVM. If running with PVM, MSG_COMMS must be set to PVM.

Building on other platforms
---------------------------

9 changes: 0 additions & 9 deletions armci/configure.ac
@@ -42,15 +42,6 @@ AS_IF([test "$ARMCI_TOP_BUILDDIR" != "$ARMCI_TOP_SRCDIR"],
# MPI compiler wrappers instead of the standard compilers.
GA_MSG_COMMS

# Hack to enable NEW_MALLOC feature
AC_ARG_ENABLE([portals-new-malloc],
[AS_HELP_STRING([--enable-portals-new-malloc],
[add -DNEW_MALLOC to CPPFLAGS])])
AS_IF([test "x$enable_portals_new_malloc" = xyes],
[AC_DEFINE([NEW_MALLOC], [1], [for portals, enable new malloc])])
AM_CONDITIONAL([PORTALS_ENABLE_NEW_MALLOC],
[test "x$enable_portals_new_malloc" = xyes])

ARMCI_ENABLE_GPC
ARMCI_ENABLE_GROUP
ARMCI_ENABLE_NB_NONCONT
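
For context on what the removed hack did: AC_ARG_ENABLE added the
--enable-portals-new-malloc switch, and AC_DEFINE caused config.h to carry
"#define NEW_MALLOC 1" when the switch was given, which C sources then test
with an ordinary preprocessor guard. Below is a minimal, self-contained
sketch of that consumption pattern; the xmalloc wrapper and main are
hypothetical illustrations, not ARMCI's actual portals allocator.

    /* sketch: consuming an autoconf feature macro like the removed NEW_MALLOC */
    #include <stdio.h>
    #include <stdlib.h>

    #ifdef NEW_MALLOC                  /* placed in config.h by AC_DEFINE */
    static void *xmalloc(size_t n)     /* hypothetical alternate allocator */
    {
        fprintf(stderr, "NEW_MALLOC path: %zu bytes\n", n);
        return malloc(n);
    }
    #else
    static void *xmalloc(size_t n) { return malloc(n); }
    #endif

    int main(void)
    {
        void *p = xmalloc(64);
        free(p);
        return 0;
    }

Compiling with -DNEW_MALLOC (or with config.h generated after the configure
switch) selects the first branch; otherwise the plain malloc path is used.
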
41 changes: 0 additions & 41 deletions armci/doc/README.PVM

This file was deleted.

120 changes: 0 additions & 120 deletions armci/doc/README.myrinet

This file was deleted.

37 changes: 0 additions & 37 deletions armci/examples/features/aggregation/simple/simple.c
@@ -77,43 +77,6 @@
/***************************** global data *******************/
int me, nproc;
void* work[MAXPROC]; /* work array for propagating addresses */



#ifdef MSG_COMMS_PVM
void pvm_init(int argc, char *argv[])
{
int mytid, mygid, ctid[MAXPROC];
int np, i;

mytid = pvm_mytid();
if((argc != 2) && (argc != 1)) goto usage;
if(argc == 1) np = 1;
if(argc == 2)
if((np = atoi(argv[1])) < 1) goto usage;
if(np > MAXPROC) goto usage;

mygid = pvm_joingroup(MPGROUP);

if(np > 1)
if (mygid == 0)
i = pvm_spawn(argv[0], argv+1, 0, "", np-1, ctid);

while(pvm_gsize(MPGROUP) < np) sleep(1);

/* sync */
pvm_barrier(MPGROUP, np);

printf("PVM initialization done!\n");

return;

usage:
fprintf(stderr, "usage: %s <nproc>\n", argv[0]);
pvm_exit();
exit(-1);
}
#endif

void create_array(void *a[], int elem_size, int ndim, int dims[])
{
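
The deleted pvm_init spawned np-1 copies of the binary, joined the MPGROUP
group, and barriered before returning. Under MPI the launcher starts all
processes up front, so the equivalent bootstrap reduces to a few calls. A
minimal, self-contained sketch for illustration follows; this is not the
examples' actual initialization path.

    /* sketch: MPI equivalent of the deleted pvm_init bootstrap */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int me, nproc;

        MPI_Init(&argc, &argv);                /* launcher has spawned all ranks */
        MPI_Comm_rank(MPI_COMM_WORLD, &me);    /* my id, cf. pvm_mytid */
        MPI_Comm_size(MPI_COMM_WORLD, &nproc); /* cf. pvm_gsize(MPGROUP) */

        MPI_Barrier(MPI_COMM_WORLD);           /* cf. pvm_barrier(MPGROUP, np) */
        if (me == 0) printf("MPI initialization done!\n");

        MPI_Finalize();
        return 0;
    }

Note there is no spawn step and no polling loop waiting for the group to
fill: mpirun/mpiexec launches all nproc ranks before MPI_Init returns.
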
@@ -82,41 +82,6 @@ short int fortran_indexing=0;
static int proc_row_list[MAXPROC];/*no of rows owned by each process - accumulated*/
static int proc_nz_list[MAXPROC]; /*no of non-zeros owned by each process */

#ifdef MSG_COMMS_PVM
void pvm_init(int argc, char *argv[])
{
int mytid, mygid, ctid[MAXPROC];
int np, i;

mytid = pvm_mytid();
if((argc != 2) && (argc != 1)) goto usage;
if(argc == 1) np = 1;
if(argc == 2)
if((np = atoi(argv[1])) < 1) goto usage;
if(np > MAXPROC) goto usage;

mygid = pvm_joingroup(MPGROUP);

if(np > 1)
if (mygid == 0)
i = pvm_spawn(argv[0], argv+1, 0, "", np-1, ctid);

while(pvm_gsize(MPGROUP) < np) sleep(1);

/* sync */
pvm_barrier(MPGROUP, np);

printf("PVM initialization done!\n");

return;

usage:
fprintf(stderr, "usage: %s <nproc>\n", argv[0]);
pvm_exit();
exit(-1);
}
#endif

void create_array(void *a[], int elem_size, int ndim, int dims[])
{
int bytes=elem_size, i, rc;
