Complete benchmark for NN=2

Eugen Betke 2018-10-22 20:43:16 +02:00
parent fe62d59925
commit 01c106f9b1
21 changed files with 2387 additions and 0 deletions

@@ -0,0 +1,46 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 1 -np 2 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 141180272640 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:1#API:MPIIO#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 31995 RUNNING AT isc17-c05
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:0@isc17-c04] HYDU_sock_write (utils/sock/sock.c:286): write error (Broken pipe)
[proxy:0:0@isc17-c04] main (pm/pmiserv/pmip.c:265): unable to send EXIT_STATUS command upstream
[mpiexec@isc17-c04] HYDT_bscu_wait_for_completion (tools/bootstrap/utils/bscu_wait.c:76): one of the processes terminated badly; aborting
[mpiexec@isc17-c04] HYDT_bsci_wait_for_completion (tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting for completion
[mpiexec@isc17-c04] HYD_pmci_wait_for_completion (pm/pmiserv/pmiserv_pmci.c:218): launcher returned error waiting for completion
[mpiexec@isc17-c04] main (ui/mpich/mpiexec.c:344): process manager error waiting for completion
+ /opt/ddn/mvapich/bin/mpiexec -ppn 1 -np 2 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
+ /opt/ddn/mvapich/bin/mpiexec -ppn 1 -np 2 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 141180272640 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/file_read -r
+ tee -a ./output/COUNT:1#NN:2#PPN:1#API:MPIIO#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 30373 RUNNING AT isc17-c04
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1
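Note: the -t 102400 runs above abort because IOR requires the block size (-b) to be an exact multiple of the transfer size (-t): 141180272640 is not divisible by 102400, but it is divisible by 16384, which is why only the -t 16384 runs in this commit complete. A quick check in the shell:
$ echo $((141180272640 % 102400))    # non-zero remainder -> ior aborts
61440
$ echo $((141180272640 % 16384))     # zero remainder -> run proceeds
0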

@@ -69,3 +69,38 @@ Summary:
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 4517768724480.
WARNING: Using actual aggregate bytes moved = 23301455872.
read 185.04 137871360 1024.00 0.070422 120.02 0.000262 120.09 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 4517768724480.
WARNING: Using actual aggregate bytes moved = 25202524160.
read 200.25 137871360 1024.00 0.000549 120.03 0.000204 120.03 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 4517768724480.
WARNING: Using actual aggregate bytes moved = 26649559040.
read 211.75 137871360 1024.00 0.000529 120.03 0.000243 120.03 2
Max Read: 211.75 MiB/sec (222.03 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
read 211.75 185.04 199.01 10.94 120.04947 0 2 1 3 0 0 1 0 0 1 141180272640 1048576 23301455872 MPIIO 0
Finished: Mon Oct 22 18:35:10 2018
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1
/esfs/jtacquaviva/ioperf/file_write
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 6
obdidx objid objid group
6 12591746 0xc02282 0
0 12420132 0xbd8424 0
2 12566486 0xbfbfd6 0
4 12508204 0xbedc2c 0
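Note: with stonewalling (-D 120) each iteration is cut off after 120 seconds instead of moving the full block, hence the "Using actual aggregate bytes moved" warnings; the reported bandwidth is the actual bytes moved divided by the elapsed time. For iteration 2 above, a shell sanity check (assuming bc is available):
$ echo "26649559040 / 1048576 / 120.03" | bc -l    # bytes -> MiB over 120.03 s; ~211.74, matching the reported 211.75 MiB/s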

@@ -0,0 +1,106 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 1 -np 2 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 141180272640 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:1#API:MPIIO#T:16384.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior WARNING: fsync() only available in POSIX. Using value of 0.
Began: Mon Oct 22 20:29:36 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 141180272640 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 20:29:36 2018
Summary:
api = MPIIO (version=3, subversion=0)
test filename = /esfs/jtacquaviva/ioperf/file_write
access = single-shared-file
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 2 (1 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 131.48 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282360414208.
WARNING: Using actual aggregate bytes moved = 2798125056.
write 21.95 137871360 16.00 0.009253 121.57 0.000321 121.58 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282355695616.
WARNING: Using actual aggregate bytes moved = 2741010432.
write 21.48 137871360 16.00 0.009212 121.67 0.000328 121.68 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282359889920.
WARNING: Using actual aggregate bytes moved = 2737668096.
write 21.46 137871360 16.00 0.001321 121.66 0.000360 121.66 2
Max Write: 21.95 MiB/sec (23.01 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
write 21.95 21.46 21.63 0.22 121.64240 0 2 1 3 0 0 1 0 0 1 141180272640 16384 2798125056 MPIIO 0
Finished: Mon Oct 22 20:35:42 2018
+ /opt/ddn/mvapich/bin/mpiexec -ppn 1 -np 2 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
+ /opt/ddn/mvapich/bin/mpiexec -ppn 1 -np 2 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 141180272640 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/file_read -r
+ tee -a ./output/COUNT:1#NN:2#PPN:1#API:MPIIO#T:16384.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior WARNING: fsync() only available in POSIX. Using value of 0.
Began: Mon Oct 22 20:35:45 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 141180272640 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/file_read -r
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 20:35:45 2018
Summary:
api = MPIIO (version=3, subversion=0)
test filename = /esfs/jtacquaviva/file_read
access = single-shared-file
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 2 (1 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 131.48 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 4517768724480.
WARNING: Using actual aggregate bytes moved = 704462848.
read 5.53 137871360 16.00 0.007929 121.49 0.000261 121.50 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 4517768724480.
WARNING: Using actual aggregate bytes moved = 704069632.
read 5.53 137871360 16.00 0.000607 121.49 0.000283 121.49 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 4517768724480.
WARNING: Using actual aggregate bytes moved = 708395008.
read 5.56 137871360 16.00 0.000590 121.50 0.000329 121.50 2
Max Read: 5.56 MiB/sec (5.83 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
read 5.56 5.53 5.54 0.02 121.49728 0 2 1 3 0 0 1 0 0 1 141180272640 16384 704462848 MPIIO 0
Finished: Mon Oct 22 20:41:50 2018
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1
/esfs/jtacquaviva/ioperf/file_write
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 7
obdidx objid objid group
7 12456923 0xbe13db 0
1 12638199 0xc0d7f7 0
3 12486510 0xbe876e 0
5 12378565 0xbce1c5 0
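Note: the layout shown above (stripe_count 4, stripe_size 1048576, stripe_offset -1) could be set up on a fresh directory with lfs setstripe; a sketch, assuming the same target path:
$ lfs setstripe -c 4 -S 1M /esfs/jtacquaviva/ioperf    # stripe over 4 OSTs in 1 MiB chunks; offset -1 lets the MDS pick the starting OST
$ lfs getstripe /esfs/jtacquaviva/ioperf               # verify the layout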

@@ -0,0 +1,46 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 1 -np 2 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 141180272640 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:1#API:POSIX#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 30323 RUNNING AT isc17-c04
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
+ /opt/ddn/mvapich/bin/mpiexec -ppn 1 -np 2 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
+ /opt/ddn/mvapich/bin/mpiexec -ppn 1 -np 2 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 141180272640 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/indread2/file -r
+ tee -a ./output/COUNT:1#NN:2#PPN:1#API:POSIX#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_0]: [cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 31974 RUNNING AT isc17-c05
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:0@isc17-c04] HYDU_sock_write (utils/sock/sock.c:286): write error (Broken pipe)
[proxy:0:0@isc17-c04] main (pm/pmiserv/pmip.c:265): unable to send EXIT_STATUS command upstream
[mpiexec@isc17-c04] HYDT_bscu_wait_for_completion (tools/bootstrap/utils/bscu_wait.c:76): one of the processes terminated badly; aborting
[mpiexec@isc17-c04] HYDT_bsci_wait_for_completion (tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting for completion
[mpiexec@isc17-c04] HYD_pmci_wait_for_completion (pm/pmiserv/pmiserv_pmci.c:218): launcher returned error waiting for completion
[mpiexec@isc17-c04] main (ui/mpich/mpiexec.c:344): process manager error waiting for completion
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1
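Note: drop_caches.sh itself is not part of this commit; it is run between the write and the read phase to defeat client-side caching. A minimal sketch of what such a script typically does (an assumption, not the actual script; requires root):
#!/bin/bash
# hypothetical drop_caches.sh: flush dirty pages, then evict page cache, dentries and inodes
sync
echo 3 > /proc/sys/vm/drop_caches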

@@ -0,0 +1,116 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 1 -np 2 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 141180272640 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:1#API:POSIX#T:16384.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
Began: Mon Oct 22 20:16:40 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 141180272640 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 20:16:40 2018
Summary:
api = POSIX
test filename = /esfs/jtacquaviva/ioperf/file_write
access = file-per-process
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 2 (1 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 131.48 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282359955456.
WARNING: Using actual aggregate bytes moved = 17441718272.
write 124.23 137871360 16.00 0.000268 133.89 0.000239 133.89 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282359463936.
WARNING: Using actual aggregate bytes moved = 16293085184.
write 116.15 137871360 16.00 0.000562 133.78 0.000250 133.78 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282359857152.
WARNING: Using actual aggregate bytes moved = 16879484928.
write 121.19 137871360 16.00 0.000577 132.83 0.000345 132.83 2
Max Write: 124.23 MiB/sec (130.27 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
write 124.23 116.15 120.52 3.33 133.50174 0 2 1 3 1 0 1 0 0 1 141180272640 16384 17441718272 POSIX 0
Finished: Mon Oct 22 20:23:26 2018
+ /opt/ddn/mvapich/bin/mpiexec -ppn 1 -np 2 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
+ tee -a ./output/COUNT:1#NN:2#PPN:1#API:POSIX#T:16384.txt
+ /opt/ddn/mvapich/bin/mpiexec -ppn 1 -np 2 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 141180272640 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/indread2/file -r
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
Began: Mon Oct 22 20:23:34 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 141180272640 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/indread2/file -r
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 20:23:34 2018
Summary:
api = POSIX
test filename = /esfs/jtacquaviva/indread2/file
access = file-per-process
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 2 (1 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 131.48 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282360545280.
WARNING: Using actual aggregate bytes moved = 390316032.
read 3.09 137871360 16.00 0.000343 120.53 0.000186 120.53 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282360545280.
WARNING: Using actual aggregate bytes moved = 395870208.
read 3.13 137871360 16.00 0.000177 120.52 0.000243 120.52 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282360545280.
WARNING: Using actual aggregate bytes moved = 402276352.
read 3.18 137871360 16.00 0.000261 120.54 0.000321 120.54 2
Max Read: 3.18 MiB/sec (3.34 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
read 3.18 3.09 3.13 0.04 120.52884 0 2 1 3 1 0 1 0 0 1 141180272640 16384 390316032 POSIX 0
Finished: Mon Oct 22 20:29:35 2018
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1
/esfs/jtacquaviva/ioperf/file_write.00000000
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 1
obdidx objid objid group
1 12638197 0xc0d7f5 0
3 12486508 0xbe876c 0
5 12378563 0xbce1c3 0
6 12591826 0xc022d2 0
/esfs/jtacquaviva/ioperf/file_write.00000001
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 6
obdidx objid objid group
6 12591825 0xc022d1 0
0 12420211 0xbd8473 0
2 12566565 0xbfc025 0
4 12508282 0xbedc7a 0

@@ -0,0 +1,59 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 2 -np 4 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 70590136320 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:2#API:MPIIO#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_3]: ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 3
[cli_2]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 2
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 30284 RUNNING AT isc17-c04
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
+ /opt/ddn/mvapich/bin/mpiexec -ppn 2 -np 4 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
+ /opt/ddn/mvapich/bin/mpiexec -ppn 2 -np 4 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 70590136320 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/file_read -r
+ tee -a ./output/COUNT:1#NN:2#PPN:2#API:MPIIO#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
[cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_3]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 3
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_2]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 2
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 31909 RUNNING AT isc17-c05
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:0@isc17-c04] HYD_pmcd_pmip_control_cmd_cb (pm/pmiserv/pmip_cb.c:912): assert (!closed) failed
[proxy:0:0@isc17-c04] HYDT_dmxu_poll_wait_for_event (tools/demux/demux_poll.c:76): callback returned error status
[proxy:0:0@isc17-c04] main (pm/pmiserv/pmip.c:256): demux engine error waiting for event
[mpiexec@isc17-c04] HYDT_bscu_wait_for_completion (tools/bootstrap/utils/bscu_wait.c:76): one of the processes terminated badly; aborting
[mpiexec@isc17-c04] HYDT_bsci_wait_for_completion (tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting for completion
[mpiexec@isc17-c04] HYD_pmci_wait_for_completion (pm/pmiserv/pmiserv_pmci.c:218): launcher returned error waiting for completion
[mpiexec@isc17-c04] main (ui/mpich/mpiexec.c:344): process manager error waiting for completion
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1

@@ -0,0 +1,106 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 2 -np 4 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 70590136320 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:2#API:MPIIO#T:16384.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior WARNING: fsync() only available in POSIX. Using value of 0.
Began: Mon Oct 22 20:04:23 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 70590136320 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 20:04:23 2018
Summary:
api = MPIIO (version=3, subversion=0)
test filename = /esfs/jtacquaviva/ioperf/file_write
access = single-shared-file
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 4 (2 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 65.74 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282360512512.
WARNING: Using actual aggregate bytes moved = 3078602752.
write 24.22 68935680 16.00 0.011510 121.22 0.000340 121.23 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282360053760.
WARNING: Using actual aggregate bytes moved = 3053191168.
write 24.01 68935680 16.00 0.015544 121.27 0.000340 121.28 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282358824960.
WARNING: Using actual aggregate bytes moved = 3070181376.
write 24.16 68935680 16.00 0.001289 121.18 0.000369 121.18 2
Max Write: 24.22 MiB/sec (25.39 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
write 24.22 24.01 24.13 0.09 121.23130 0 4 2 3 0 0 1 0 0 1 70590136320 16384 3078602752 MPIIO 0
Finished: Mon Oct 22 20:10:28 2018
+ /opt/ddn/mvapich/bin/mpiexec -ppn 2 -np 4 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
+ /opt/ddn/mvapich/bin/mpiexec -ppn 2 -np 4 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 70590136320 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/file_read -r
+ tee -a ./output/COUNT:1#NN:2#PPN:2#API:MPIIO#T:16384.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior WARNING: fsync() only available in POSIX. Using value of 0.
Began: Mon Oct 22 20:10:36 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 70590136320 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/file_read -r
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 20:10:36 2018
Summary:
api = MPIIO (version=3, subversion=0)
test filename = /esfs/jtacquaviva/file_read
access = single-shared-file
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 4 (2 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 65.74 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 4517768724480.
WARNING: Using actual aggregate bytes moved = 1398800384.
read 11.01 68935680 16.00 0.008116 121.15 0.001167 121.16 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 4517768724480.
WARNING: Using actual aggregate bytes moved = 1406582784.
read 11.07 68935680 16.00 0.001568 121.14 0.001220 121.15 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 4517768724480.
WARNING: Using actual aggregate bytes moved = 1412562944.
read 11.12 68935680 16.00 0.001541 121.14 0.001105 121.15 2
Max Read: 11.12 MiB/sec (11.66 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
read 11.12 11.01 11.07 0.04 121.14940 0 4 2 3 0 0 1 0 0 1 70590136320 16384 1398800384 MPIIO 0
Finished: Mon Oct 22 20:16:39 2018
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1
/esfs/jtacquaviva/ioperf/file_write
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 6
obdidx objid objid group
6 12591822 0xc022ce 0
0 12420209 0xbd8471 0
2 12566562 0xbfc022 0
4 12508279 0xbedc77 0
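Note: each run is tee'd into ./output/ with the parameters encoded in the file name (COUNT repetition, NN nodes, PPN processes per node, API, T transfer size in bytes), so the summary lines can be collected across configurations afterwards, e.g.:
$ grep -H '^Max ' ./output/COUNT:1#NN:2#PPN:*#API:*#T:*.txt    # Max Write:/Max Read: per configuration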

@@ -0,0 +1,59 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 2 -np 4 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 70590136320 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:2#API:POSIX#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_2]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 2
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_3]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 3
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 30245 RUNNING AT isc17-c04
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
+ /opt/ddn/mvapich/bin/mpiexec -ppn 2 -np 4 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
+ /opt/ddn/mvapich/bin/mpiexec -ppn 2 -np 4 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 70590136320 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/indread2/file -r
+ tee -a ./output/COUNT:1#NN:2#PPN:2#API:POSIX#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_0]: [cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
[cli_3]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 3
aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
[cli_2]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 2
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 30260 RUNNING AT isc17-c04
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:1@isc17-c05] HYD_pmcd_pmip_control_cmd_cb (pm/pmiserv/pmip_cb.c:912): assert (!closed) failed
[proxy:0:1@isc17-c05] HYDT_dmxu_poll_wait_for_event (tools/demux/demux_poll.c:76): callback returned error status
[proxy:0:1@isc17-c05] main (pm/pmiserv/pmip.c:256): demux engine error waiting for event
[mpiexec@isc17-c04] HYDT_bscu_wait_for_completion (tools/bootstrap/utils/bscu_wait.c:76): one of the processes terminated badly; aborting
[mpiexec@isc17-c04] HYDT_bsci_wait_for_completion (tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting for completion
[mpiexec@isc17-c04] HYD_pmci_wait_for_completion (pm/pmiserv/pmiserv_pmci.c:218): launcher returned error waiting for completion
[mpiexec@isc17-c04] main (ui/mpich/mpiexec.c:344): process manager error waiting for completion
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1

@@ -0,0 +1,140 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 2 -np 4 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 70590136320 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:2#API:POSIX#T:16384.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
Began: Mon Oct 22 19:51:43 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 70590136320 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 19:51:43 2018
Summary:
api = POSIX
test filename = /esfs/jtacquaviva/ioperf/file_write
access = file-per-process
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 4 (2 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 65.74 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282360348672.
WARNING: Using actual aggregate bytes moved = 28024242176.
write 203.70 68935680 16.00 0.000505 131.20 0.000238 131.20 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282358808576.
WARNING: Using actual aggregate bytes moved = 26859307008.
write 200.85 68935680 16.00 0.007329 127.53 0.000260 127.53 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282359578624.
WARNING: Using actual aggregate bytes moved = 26201292800.
write 196.89 68935680 16.00 0.000843 126.91 0.000417 126.91 2
Max Write: 203.70 MiB/sec (213.59 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
write 203.70 196.89 200.48 2.79 128.54957 0 4 2 3 1 0 1 0 0 1 70590136320 16384 28024242176 POSIX 0
Finished: Mon Oct 22 19:58:13 2018
+ /opt/ddn/mvapich/bin/mpiexec -ppn 2 -np 4 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
+ /opt/ddn/mvapich/bin/mpiexec -ppn 2 -np 4 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 70590136320 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/indread2/file -r
+ tee -a ./output/COUNT:1#NN:2#PPN:2#API:POSIX#T:16384.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
Began: Mon Oct 22 19:58:21 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 70590136320 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/indread2/file -r
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 19:58:21 2018
Summary:
api = POSIX
test filename = /esfs/jtacquaviva/indread2/file
access = file-per-process
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 4 (2 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 65.74 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 564721090560.
WARNING: Using actual aggregate bytes moved = 812826624.
read 6.45 68935680 16.00 0.002821 120.23 0.002596 120.23 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 564721090560.
WARNING: Using actual aggregate bytes moved = 839647232.
read 6.66 68935680 16.00 0.002560 120.24 0.002601 120.24 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 564721090560.
WARNING: Using actual aggregate bytes moved = 843300864.
read 6.69 68935680 16.00 0.002626 120.23 0.002677 120.23 2
Max Read: 6.69 MiB/sec (7.01 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
read 6.69 6.45 6.60 0.11 120.23531 0 4 2 3 1 0 1 0 0 1 70590136320 16384 812826624 POSIX 0
Finished: Mon Oct 22 20:04:22 2018
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1
/esfs/jtacquaviva/ioperf/file_write.00000002
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 7
obdidx objid objid group
7 12456916 0xbe13d4 0
1 12638191 0xc0d7ef 0
3 12486503 0xbe8767 0
5 12378558 0xbce1be 0
/esfs/jtacquaviva/ioperf/file_write.00000000
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 4
obdidx objid objid group
4 12508275 0xbedc73 0
3 12486502 0xbe8766 0
5 12378557 0xbce1bd 0
0 12420206 0xbd846e 0
/esfs/jtacquaviva/ioperf/file_write.00000003
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 1
obdidx objid objid group
1 12638190 0xc0d7ee 0
6 12591820 0xc022cc 0
2 12566559 0xbfc01f 0
4 12508276 0xbedc74 0
/esfs/jtacquaviva/ioperf/file_write.00000001
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 0
obdidx objid objid group
0 12420207 0xbd846f 0
2 12566560 0xbfc020 0
4 12508277 0xbedc75 0
7 12456917 0xbe13d5 0

@@ -0,0 +1,85 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 4 -np 8 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 35295068160 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:4#API:MPIIO#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
[cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
[cli_2]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 2
[cli_3]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 3
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_4]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 4
[cli_5]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 5
[cli_6]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 6
[cli_7]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 7
tee: standard outputior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
: Resource temporarily unavailable
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 30192 RUNNING AT isc17-c04
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
tee: write error
+ /opt/ddn/mvapich/bin/mpiexec -ppn 4 -np 8 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
+ /opt/ddn/mvapich/bin/mpiexec -ppn 4 -np 8 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 35295068160 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/file_read -r
+ tee -a ./output/COUNT:1#NN:2#PPN:4#API:MPIIO#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_5]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 5
[cli_7]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 7
[cli_4]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 4
[cli_6]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 6
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
[cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
[cli_2]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 2
[cli_3]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 3
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 30218 RUNNING AT isc17-c04
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:1@isc17-c05] HYD_pmcd_pmip_control_cmd_cb (pm/pmiserv/pmip_cb.c:912): assert (!closed) failed
[proxy:0:1@isc17-c05] HYDT_dmxu_poll_wait_for_event (tools/demux/demux_poll.c:76): callback returned error status
[proxy:0:1@isc17-c05] main (pm/pmiserv/pmip.c:256): demux engine error waiting for event
[mpiexec@isc17-c04] HYDT_bscu_wait_for_completion (tools/bootstrap/utils/bscu_wait.c:76): one of the processes terminated badly; aborting
[mpiexec@isc17-c04] HYDT_bsci_wait_for_completion (tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting for completion
[mpiexec@isc17-c04] HYD_pmci_wait_for_completion (pm/pmiserv/pmiserv_pmci.c:218): launcher returned error waiting for completion
[mpiexec@isc17-c04] main (ui/mpich/mpiexec.c:344): process manager error waiting for completion
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1

@@ -0,0 +1,106 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 4 -np 8 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 35295068160 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:4#API:MPIIO#T:16384.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior WARNING: fsync() only available in POSIX. Using value of 0.
Began: Mon Oct 22 19:39:28 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 35295068160 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 19:39:28 2018
Summary:
api = MPIIO (version=3, subversion=0)
test filename = /esfs/jtacquaviva/ioperf/file_write
access = single-shared-file
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 8 (4 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 32.87 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282358628352.
WARNING: Using actual aggregate bytes moved = 3452682240.
write 27.17 34467840 16.00 0.009034 121.17 0.000358 121.18 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282359103488.
WARNING: Using actual aggregate bytes moved = 3435954176.
write 27.06 34467840 16.00 0.001303 121.11 0.000463 121.11 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282360119296.
WARNING: Using actual aggregate bytes moved = 3432136704.
write 27.00 34467840 16.00 0.001366 121.25 0.000511 121.25 2
Max Write: 27.17 MiB/sec (28.49 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
write 27.17 27.00 27.07 0.07 121.18149 0 8 4 3 0 0 1 0 0 1 35295068160 16384 3452682240 MPIIO 0
Finished: Mon Oct 22 19:45:33 2018
+ /opt/ddn/mvapich/bin/mpiexec -ppn 4 -np 8 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
+ /opt/ddn/mvapich/bin/mpiexec -ppn 4 -np 8 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 35295068160 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/file_read -r
+ tee -a ./output/COUNT:1#NN:2#PPN:4#API:MPIIO#T:16384.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior WARNING: fsync() only available in POSIX. Using value of 0.
Began: Mon Oct 22 19:45:39 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 35295068160 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/file_read -r
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 19:45:39 2018
Summary:
api = MPIIO (version=3, subversion=0)
test filename = /esfs/jtacquaviva/file_read
access = single-shared-file
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 8 (4 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 32.87 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 4517768724480.
WARNING: Using actual aggregate bytes moved = 2676490240.
read 21.09 34467840 16.00 0.007218 121.03 0.000318 121.04 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 4517768724480.
WARNING: Using actual aggregate bytes moved = 2702786560.
read 21.30 34467840 16.00 0.000894 121.03 0.000365 121.03 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 4517768724480.
WARNING: Using actual aggregate bytes moved = 2729377792.
read 21.51 34467840 16.00 0.000908 121.02 0.000408 121.02 2
Max Read: 21.51 MiB/sec (22.55 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
read 21.51 21.09 21.30 0.17 121.03026 0 8 4 3 0 0 1 0 0 1 35295068160 16384 2676490240 MPIIO 0
Finished: Mon Oct 22 19:51:42 2018
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1
/esfs/jtacquaviva/ioperf/file_write
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 2
obdidx objid objid group
2 12566554 0xbfc01a 0
4 12508271 0xbedc6f 0
7 12456912 0xbe13d0 0
1 12638185 0xc0d7e9 0
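Note: across the PPN sweep the per-task block size is halved each time the task count doubles (-b 141180272640 for 2 tasks, 70590136320 for 4, 35295068160 for 8), so the expected aggregate file size stays constant at 282360545280 bytes (262.97 GiB). A quick check in the shell:
$ echo $((141180272640 * 2)) $((70590136320 * 4)) $((35295068160 * 8))
282360545280 282360545280 282360545280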

@@ -0,0 +1,91 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 4 -np 8 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 35295068160 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:4#API:POSIX#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
[cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
[cli_2]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 2
[cli_3]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 3
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_5]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 5
[cli_7]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 7
[cli_4]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 4
[cli_6]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 6
tee: standard outputior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
: Resource temporarily unavailable
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 31610 RUNNING AT isc17-c05
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:0@isc17-c04] HYDU_sock_write (utils/sock/sock.c:286): write error (Broken pipe)
[proxy:0:0@isc17-c04] main (pm/pmiserv/pmip.c:265): unable to send EXIT_STATUS command upstream
[mpiexec@isc17-c04] HYDT_bscu_wait_for_completion (tools/bootstrap/utils/bscu_wait.c:76): one of the processes terminated badly; aborting
[mpiexec@isc17-c04] HYDT_bsci_wait_for_completion (tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting for completion
[mpiexec@isc17-c04] HYD_pmci_wait_for_completion (pm/pmiserv/pmiserv_pmci.c:218): launcher returned error waiting for completion
[mpiexec@isc17-c04] main (ui/mpich/mpiexec.c:344): process manager error waiting for completion
tee: write error
+ /opt/ddn/mvapich/bin/mpiexec -ppn 4 -np 8 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
+ /opt/ddn/mvapich/bin/mpiexec -ppn 4 -np 8 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 35295068160 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/indread2/file -r
+ tee -a ./output/COUNT:1#NN:2#PPN:4#API:POSIX#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_5]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 5
[cli_7]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 7
[cli_4]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 4
[cli_6]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 6
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
[cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
[cli_3]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 3
[cli_2]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 2
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 31673 RUNNING AT isc17-c05
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:0@isc17-c04] HYD_pmcd_pmip_control_cmd_cb (pm/pmiserv/pmip_cb.c:912): assert (!closed) failed
[proxy:0:0@isc17-c04] HYDT_dmxu_poll_wait_for_event (tools/demux/demux_poll.c:76): callback returned error status
[proxy:0:0@isc17-c04] main (pm/pmiserv/pmip.c:256): demux engine error waiting for event
[mpiexec@isc17-c04] HYDT_bscu_wait_for_completion (tools/bootstrap/utils/bscu_wait.c:76): one of the processes terminated badly; aborting
[mpiexec@isc17-c04] HYDT_bsci_wait_for_completion (tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting for completion
[mpiexec@isc17-c04] HYD_pmci_wait_for_completion (pm/pmiserv/pmiserv_pmci.c:218): launcher returned error waiting for completion
[mpiexec@isc17-c04] main (ui/mpich/mpiexec.c:344): process manager error waiting for completion
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1

@@ -0,0 +1,188 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 4 -np 8 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 35295068160 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:4#API:POSIX#T:16384.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
Began: Mon Oct 22 19:26:35 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 35295068160 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 19:26:35 2018
Summary:
api = POSIX
test filename = /esfs/jtacquaviva/ioperf/file_write
access = file-per-process
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 8 (4 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 32.87 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282360545280.
WARNING: Using actual aggregate bytes moved = 42085253120.
write 310.04 34467840 16.00 0.000940 129.45 0.000283 129.45 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282358759424.
WARNING: Using actual aggregate bytes moved = 34805186560.
write 250.84 34467840 16.00 0.010233 132.32 0.000290 132.33 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282359365632.
WARNING: Using actual aggregate bytes moved = 37848924160.
write 270.72 34467840 16.00 0.019886 133.31 0.000331 133.33 2
Max Write: 310.04 MiB/sec (325.10 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
write 310.04 250.84 277.20 24.60 131.70394 0 8 4 3 1 0 1 0 0 1 35295068160 16384 42085253120 POSIX 0
Finished: Mon Oct 22 19:33:14 2018
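
Note on the WARNING/bandwidth pairs in the write phase above: with -s 1, IOR expects -b bytes from each of the 8 tasks (35295068160 * 8 = 282360545280), but the -D 120 stonewall stops each iteration after roughly 120 s, so the bandwidth figure is computed from the "actual aggregate bytes moved", not from the nominal file size. A small Python cross-check (illustration only, not part of the harness) against iteration 0:

# Cross-checks the warning and result lines of write iteration 0 above.
b, ntasks = 35295068160, 8
print(b * ntasks)                     # 282360545280, the expected aggregate size
moved, total_s = 42085253120, 129.45  # actual aggregate bytes moved, total(s)
print(moved / 2**20 / total_s)        # ~310 MiB/s, the reported write bandwidth
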
+ /opt/ddn/mvapich/bin/mpiexec -ppn 4 -np 8 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
+ /opt/ddn/mvapich/bin/mpiexec -ppn 4 -np 8 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 35295068160 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/indread2/file -r
+ tee -a ./output/COUNT:1#NN:2#PPN:4#API:POSIX#T:16384.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
Began: Mon Oct 22 19:33:25 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 35295068160 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/indread2/file -r
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 19:33:25 2018
Summary:
api = POSIX
test filename = /esfs/jtacquaviva/indread2/file
access = file-per-process
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 8 (4 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 32.87 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 1129442181120.
WARNING: Using actual aggregate bytes moved = 1890615296.
read 15.01 34467840 16.00 0.000837 120.13 0.000239 120.13 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 1129442181120.
WARNING: Using actual aggregate bytes moved = 2103050240.
read 16.70 34467840 16.00 0.000250 120.12 0.000203 120.12 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 1129442181120.
WARNING: Using actual aggregate bytes moved = 2139176960.
read 16.98 34467840 16.00 0.000236 120.12 0.000190 120.12 2
Max Read: 16.98 MiB/sec (17.81 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
read 16.98 15.01 16.23 0.87 120.12448 0 8 4 3 1 0 1 0 0 1 35295068160 16384 1890615296 POSIX 0
Finished: Mon Oct 22 19:39:26 2018
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1
/esfs/jtacquaviva/ioperf/file_write.00000006
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 2
obdidx objid objid group
2 12566550 0xbfc016 0
4 12508267 0xbedc6b 0
7 12456908 0xbe13cc 0
1 12638181 0xc0d7e5 0
/esfs/jtacquaviva/ioperf/file_write.00000002
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 6
obdidx objid objid group
6 12591812 0xc022c4 0
0 12420199 0xbd8467 0
2 12566552 0xbfc018 0
4 12508269 0xbedc6d 0
/esfs/jtacquaviva/ioperf/file_write.00000000
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 7
obdidx objid objid group
7 12456910 0xbe13ce 0
1 12638183 0xc0d7e7 0
3 12486496 0xbe8760 0
5 12378550 0xbce1b6 0
/esfs/jtacquaviva/ioperf/file_write.00000004
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 0
obdidx objid objid group
0 12420196 0xbd8464 0
2 12566549 0xbfc015 0
4 12508266 0xbedc6a 0
3 12486494 0xbe875e 0
/esfs/jtacquaviva/ioperf/file_write.00000007
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 4
obdidx objid objid group
4 12508268 0xbedc6c 0
7 12456909 0xbe13cd 0
1 12638182 0xc0d7e6 0
3 12486495 0xbe875f 0
/esfs/jtacquaviva/ioperf/file_write.00000003
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 1
obdidx objid objid group
1 12638180 0xc0d7e4 0
5 12378548 0xbce1b4 0
6 12591810 0xc022c2 0
0 12420197 0xbd8465 0
/esfs/jtacquaviva/ioperf/file_write.00000005
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 1
obdidx objid objid group
1 12638179 0xc0d7e3 0
3 12486493 0xbe875d 0
5 12378547 0xbce1b3 0
6 12591809 0xc022c1 0
/esfs/jtacquaviva/ioperf/file_write.00000001
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 5
obdidx objid objid group
5 12378549 0xbce1b5 0
6 12591811 0xc022c3 0
0 12420198 0xbd8466 0
2 12566551 0xbfc017 0

@@ -0,0 +1,105 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 6 -np 12 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 23530045440 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:6#API:MPIIO#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_7]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 7
[cli_3]: [cli_9]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 9
[cli_5]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 5
[cli_11]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 11
[cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
[cli_6]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 6
aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 3
[cli_8]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 8
[cli_2]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 2
[cli_10]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 10
[cli_4]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 4
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 31487 RUNNING AT isc17-c05
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:0@isc17-c04] HYDU_sock_write (utils/sock/sock.c:286): write error (Broken pipe)
[proxy:0:0@isc17-c04] main (pm/pmiserv/pmip.c:265): unable to send EXIT_STATUS command upstream
+ /opt/ddn/mvapich/bin/mpiexec -ppn 6 -np 12 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
+ /opt/ddn/mvapich/bin/mpiexec -ppn 6 -np 12 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 23530045440 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/file_read -r
+ tee -a ./output/COUNT:1#NN:2#PPN:6#API:MPIIO#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_6]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 6
[cli_7]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 7
[cli_8]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 8
[cli_10]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 10
[cli_11]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 11
[cli_9]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 9
[cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
[cli_5]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 5
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
[cli_2]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 2
[cli_3]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 3
[cli_4]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 4
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 31572 RUNNING AT isc17-c05
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:0@isc17-c04] HYD_pmcd_pmip_control_cmd_cb (pm/pmiserv/pmip_cb.c:912): assert (!closed) failed
[proxy:0:0@isc17-c04] HYDT_dmxu_poll_wait_for_event (tools/demux/demux_poll.c:76): callback returned error status
[proxy:0:0@isc17-c04] main (pm/pmiserv/pmip.c:256): demux engine error waiting for event
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1
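
All of the -t 102400 runs in this log abort the same way, at the parameter check in ior.c:2293, before any I/O happens: none of the block sizes used in this sweep (35295068160, 23530045440, 17647534080) is an integer multiple of 102400, while all of them divide evenly by 16384, which is why only the -t 16384 runs produce results. A minimal Python sketch (illustration only, not part of the benchmark scripts) reproducing the check and rounding -b down to the nearest valid value:

# Reproduces IOR's block/transfer size check (ior.c:2293) for the
# -b/-t combinations used in this sweep. Illustration only.
block_sizes = [35295068160, 23530045440, 17647534080]
for t in (16384, 102400):
    for b in block_sizes:
        if b % t == 0:
            print(f"t={t}: b={b} is valid")
        else:
            print(f"t={t}: b={b} invalid, nearest valid b={(b // t) * t}")

For the PPN:6 MPIIO run above, rounding 23530045440 down to 23529984000 (= 229785 * 102400) would have satisfied the check.
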

@@ -0,0 +1,106 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 6 -np 12 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 23530045440 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:6#API:MPIIO#T:16384.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior WARNING: fsync() only available in POSIX. Using value of 0.
Began: Mon Oct 22 19:14:16 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 23530045440 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 19:14:16 2018
Summary:
api = MPIIO (version=3, subversion=0)
test filename = /esfs/jtacquaviva/ioperf/file_write
access = single-shared-file
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 12 (6 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 21.91 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282357727232.
WARNING: Using actual aggregate bytes moved = 3726131200.
write 29.34 22978560 16.00 0.012315 121.09 0.000733 121.11 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282358185984.
WARNING: Using actual aggregate bytes moved = 3746250752.
write 29.49 22978560 16.00 0.001810 121.14 0.000728 121.14 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282358595584.
WARNING: Using actual aggregate bytes moved = 3746906112.
write 29.44 22978560 16.00 0.002351 121.36 0.000628 121.36 2
Max Write: 29.49 MiB/sec (30.93 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
write 29.49 29.34 29.43 0.06 121.20198 0 12 6 3 0 0 1 0 0 1 23530045440 16384 3726131200 MPIIO 0
Finished: Mon Oct 22 19:20:21 2018
+ /opt/ddn/mvapich/bin/mpiexec -ppn 6 -np 12 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
+ /opt/ddn/mvapich/bin/mpiexec -ppn 6 -np 12 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 23530045440 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/file_read -r
+ tee -a ./output/COUNT:1#NN:2#PPN:6#API:MPIIO#T:16384.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior WARNING: fsync() only available in POSIX. Using value of 0.
Began: Mon Oct 22 19:20:31 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 23530045440 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/file_read -r
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 19:20:31 2018
Summary:
api = MPIIO (version=3, subversion=0)
test filename = /esfs/jtacquaviva/file_read
access = single-shared-file
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 12 (6 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 21.91 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 4517768724480.
WARNING: Using actual aggregate bytes moved = 3857809408.
read 30.41 22978560 16.00 0.005342 120.98 0.000483 120.98 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 4517768724480.
WARNING: Using actual aggregate bytes moved = 3902521344.
read 30.76 22978560 16.00 0.001297 120.98 0.000489 120.99 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 4517768724480.
WARNING: Using actual aggregate bytes moved = 3955720192.
read 31.18 22978560 16.00 0.001311 120.99 0.000473 120.99 2
Max Read: 31.18 MiB/sec (32.69 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
read 31.18 30.41 30.78 0.31 120.98708 0 12 6 3 0 0 1 0 0 1 23530045440 16384 3857809408 MPIIO 0
Finished: Mon Oct 22 19:26:34 2018
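
The stat() warnings in the read phase above are of a different kind: /esfs/jtacquaviva/file_read evidently pre-exists and is much larger than this run's expected aggregate, exactly 16 times it, so IOR warns and again scores bandwidth on the bytes actually moved inside the 120 s stonewall. A quick check (illustration only):

# Relates the read-phase stat() warning to this run's parameters.
stat_size, expected = 4517768724480, 282360545280
print(stat_size / expected)          # 16.0: file_read is 16x this run's aggregate
print(3857809408 / 2**20 / 120.98)   # ~30.4 MiB/s, matching read iteration 0
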
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1
/esfs/jtacquaviva/ioperf/file_write
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 6
obdidx objid objid group
6 12591800 0xc022b8 0
0 12420186 0xbd845a 0
2 12566540 0xbfc00c 0
4 12508257 0xbedc61 0

@@ -0,0 +1,107 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 6 -np 12 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 23530045440 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:6#API:POSIX#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
[cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_4]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 4
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_5]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 5
[cli_6]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 6
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_2]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 2
[cli_7]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 7
[cli_3]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 3
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_11]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 11
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_8]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 8
[cli_10]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 10
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_9]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 9
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 29916 RUNNING AT isc17-c04
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
+ /opt/ddn/mvapich/bin/mpiexec -ppn 6 -np 12 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
+ /opt/ddn/mvapich/bin/mpiexec -ppn 6 -np 12 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 23530045440 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/indread2/file -r
+ tee -a ./output/COUNT:1#NN:2#PPN:6#API:POSIX#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_7]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 7
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_11]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 11
[cli_9]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 9
[cli_6]: [cli_8]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 8
[cli_10]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 10
aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 6
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
[cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
[cli_2]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 2
[cli_3]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 3
[cli_4]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 4
[cli_5]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 5
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 31445 RUNNING AT isc17-c05
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:0@isc17-c04] HYD_pmcd_pmip_control_cmd_cb (pm/pmiserv/pmip_cb.c:912): assert (!closed) failed
[proxy:0:0@isc17-c04] HYDT_dmxu_poll_wait_for_event (tools/demux/demux_poll.c:76): callback returned error status
[proxy:0:0@isc17-c04] main (pm/pmiserv/pmip.c:256): demux engine error waiting for event
[mpiexec@isc17-c04] HYDT_bscu_wait_for_completion (tools/bootstrap/utils/bscu_wait.c:76): one of the processes terminated badly; aborting
[mpiexec@isc17-c04] HYDT_bsci_wait_for_completion (tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting for completion
[mpiexec@isc17-c04] HYD_pmci_wait_for_completion (pm/pmiserv/pmiserv_pmci.c:218): launcher returned error waiting for completion
[mpiexec@isc17-c04] main (ui/mpich/mpiexec.c:344): process manager error waiting for completion
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1

@@ -0,0 +1,236 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 6 -np 12 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 23530045440 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:6#API:POSIX#T:16384.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
Began: Mon Oct 22 19:01:21 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 23530045440 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 19:01:21 2018
Summary:
api = POSIX
test filename = /esfs/jtacquaviva/ioperf/file_write
access = file-per-process
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 12 (6 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 21.91 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282360152064.
WARNING: Using actual aggregate bytes moved = 42820452352.
write 309.34 22978560 16.00 0.001887 132.01 0.000686 132.01 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282357071872.
WARNING: Using actual aggregate bytes moved = 37654970368.
write 272.36 22978560 16.00 0.002358 131.85 0.000749 131.85 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282359955456.
WARNING: Using actual aggregate bytes moved = 38357450752.
write 275.18 22978560 16.00 0.015756 132.92 0.000867 132.93 2
Max Write: 309.34 MiB/sec (324.36 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
write 309.34 272.36 285.63 16.81 132.26445 0 12 6 3 1 0 1 0 0 1 23530045440 16384 42820452352 POSIX 0
Finished: Mon Oct 22 19:08:02 2018
+ /opt/ddn/mvapich/bin/mpiexec -ppn 6 -np 12 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
+ /opt/ddn/mvapich/bin/mpiexec -ppn 6 -np 12 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 23530045440 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/indread2/file -r
+ tee -a ./output/COUNT:1#NN:2#PPN:6#API:POSIX#T:16384.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
Began: Mon Oct 22 19:08:13 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 23530045440 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/indread2/file -r
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 19:08:13 2018
Summary:
api = POSIX
test filename = /esfs/jtacquaviva/indread2/file
access = file-per-process
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 12 (6 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 21.91 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 1694163271680.
WARNING: Using actual aggregate bytes moved = 3028910080.
read 24.05 22978560 16.00 0.001290 120.09 0.000624 120.09 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 1694163271680.
WARNING: Using actual aggregate bytes moved = 3470229504.
read 27.56 22978560 16.00 0.000626 120.09 0.000720 120.09 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 1694163271680.
WARNING: Using actual aggregate bytes moved = 3558342656.
read 28.26 22978560 16.00 0.000764 120.09 0.000809 120.09 2
Max Read: 28.26 MiB/sec (29.63 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
read 28.26 24.05 26.62 1.84 120.09063 0 12 6 3 1 0 1 0 0 1 23530045440 16384 3028910080 POSIX 0
Finished: Mon Oct 22 19:14:13 2018
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1
/esfs/jtacquaviva/ioperf/file_write.00000006
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 4
obdidx objid objid group
4 12508253 0xbedc5d 0
7 12456894 0xbe13be 0
1 12638167 0xc0d7d7 0
3 12486479 0xbe874f 0
/esfs/jtacquaviva/ioperf/file_write.00000002
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 3
obdidx objid objid group
3 12486478 0xbe874e 0
5 12378532 0xbce1a4 0
6 12591794 0xc022b2 0
0 12420180 0xbd8454 0
/esfs/jtacquaviva/ioperf/file_write.00000000
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 2
obdidx objid objid group
2 12566538 0xbfc00a 0
4 12508256 0xbedc60 0
7 12456897 0xbe13c1 0
1 12638170 0xc0d7da 0
/esfs/jtacquaviva/ioperf/file_write.00000004
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 3
obdidx objid objid group
3 12486482 0xbe8752 0
5 12378536 0xbce1a8 0
6 12591798 0xc022b6 0
0 12420184 0xbd8458 0
/esfs/jtacquaviva/ioperf/file_write.00000007
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 7
obdidx objid objid group
7 12456895 0xbe13bf 0
1 12638168 0xc0d7d8 0
3 12486480 0xbe8750 0
5 12378534 0xbce1a6 0
/esfs/jtacquaviva/ioperf/file_write.00000003
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 5
obdidx objid objid group
5 12378533 0xbce1a5 0
6 12591795 0xc022b3 0
0 12420181 0xbd8455 0
2 12566535 0xbfc007 0
/esfs/jtacquaviva/ioperf/file_write.00000008
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 0
obdidx objid objid group
0 12420179 0xbd8453 0
2 12566533 0xbfc005 0
4 12508251 0xbedc5b 0
7 12456892 0xbe13bc 0
/esfs/jtacquaviva/ioperf/file_write.00000010
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 2
obdidx objid objid group
2 12566534 0xbfc006 0
4 12508252 0xbedc5c 0
7 12456893 0xbe13bd 0
1 12638166 0xc0d7d6 0
/esfs/jtacquaviva/ioperf/file_write.00000005
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 0
obdidx objid objid group
0 12420183 0xbd8457 0
2 12566537 0xbfc009 0
4 12508255 0xbedc5f 0
7 12456896 0xbe13c0 0
/esfs/jtacquaviva/ioperf/file_write.00000009
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 6
obdidx objid objid group
6 12591796 0xc022b4 0
0 12420182 0xbd8456 0
2 12566536 0xbfc008 0
4 12508254 0xbedc5e 0
/esfs/jtacquaviva/ioperf/file_write.00000001
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 1
obdidx objid objid group
1 12638165 0xc0d7d5 0
3 12486477 0xbe874d 0
5 12378531 0xbce1a3 0
6 12591793 0xc022b1 0
/esfs/jtacquaviva/ioperf/file_write.00000011
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 1
obdidx objid objid group
1 12638169 0xc0d7d9 0
3 12486481 0xbe8751 0
5 12378535 0xbce1a7 0
6 12591797 0xc022b5 0
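
The lfs getstripe dump above shows each of the 12 file-per-process outputs carrying 4 objects at a 1 MiB stripe size. A short parser (illustration only; it assumes exactly the output layout printed above, with an "obdidx objid objid group" header per file) tallying objects per OST over this listing:

# Tallies stripe objects per OST from the lfs getstripe output above.
import sys
from collections import Counter

counts, in_table = Counter(), False
for line in sys.stdin:
    fields = line.split()
    if fields[:1] == ['obdidx']:                 # header of an object table
        in_table = True
    elif in_table and len(fields) == 4 and fields[0].isdigit():
        counts[int(fields[0])] += 1              # first column is the OST index
    else:
        in_table = False
print(dict(sorted(counts.items())))              # here: {0: 6, 1: 6, ..., 7: 6}

Fed this listing on stdin, it reports six objects on each of OSTs 0 through 7, i.e. the allocator spread the 48 objects evenly.
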

@@ -0,0 +1,128 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 8 -np 16 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 17647534080 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:8#API:MPIIO#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_8]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 8
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_10]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 10
[cli_12]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 12
[cli_14]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 14
[cli_9]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 9
[cli_15]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 15
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_11]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 11
[cli_13]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 13
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_2]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 2
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
[cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
[cli_3]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 3
[cli_4]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 4
[cli_5]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 5
[cli_6]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 6
[cli_7]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 7
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 31253 RUNNING AT isc17-c05
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:0@isc17-c04] HYDU_sock_write (utils/sock/sock.c:286): write error (Broken pipe)
[proxy:0:0@isc17-c04] main (pm/pmiserv/pmip.c:265): unable to send EXIT_STATUS command upstream
+ /opt/ddn/mvapich/bin/mpiexec -ppn 8 -np 16 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
+ /opt/ddn/mvapich/bin/mpiexec -ppn 8 -np 16 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 17647534080 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/file_read -r
+ tee -a ./output/COUNT:1#NN:2#PPN:8#API:MPIIO#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_8]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 8
[cli_9]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 9
[cli_10]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 10
[cli_11]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 11
[cli_12]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 12
[cli_13]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 13
[cli_14]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 14
[cli_15]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 15
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
[cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
[cli_6]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 6
[cli_7]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 7
[cli_2]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 2
[cli_3]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 3
[cli_5]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 5
[cli_4]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 4
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 31334 RUNNING AT isc17-c05
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:0@isc17-c04] HYDU_sock_write (utils/sock/sock.c:286): write error (Broken pipe)
[proxy:0:0@isc17-c04] main (pm/pmiserv/pmip.c:265): unable to send EXIT_STATUS command upstream
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1

@@ -0,0 +1,106 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 8 -np 16 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 17647534080 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:8#API:MPIIO#T:16384.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior WARNING: fsync() only available in POSIX. Using value of 0.
Began: Mon Oct 22 18:49:04 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 17647534080 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 18:49:04 2018
Summary:
api = MPIIO (version=3, subversion=0)
test filename = /esfs/jtacquaviva/ioperf/file_write
access = single-shared-file
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 16 (8 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 16.44 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282359676928.
WARNING: Using actual aggregate bytes moved = 4080975872.
write 32.14 17233920 16.00 0.011236 121.08 0.000746 121.10 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282359775232.
WARNING: Using actual aggregate bytes moved = 4095754240.
write 32.24 17233920 16.00 0.001386 121.15 0.000316 121.15 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282359578624.
WARNING: Using actual aggregate bytes moved = 4058431488.
write 31.91 17233920 16.00 0.035547 121.24 0.000427 121.28 2
Max Write: 32.24 MiB/sec (33.81 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
write 32.24 31.91 32.10 0.14 121.17587 0 16 8 3 0 0 1 0 0 1 17647534080 16384 4080975872 MPIIO 0
Finished: Mon Oct 22 18:55:09 2018
+ /opt/ddn/mvapich/bin/mpiexec -ppn 8 -np 16 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
+ /opt/ddn/mvapich/bin/mpiexec -ppn 8 -np 16 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 17647534080 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/file_read -r
+ tee -a ./output/COUNT:1#NN:2#PPN:8#API:MPIIO#T:16384.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior WARNING: fsync() only available in POSIX. Using value of 0.
Began: Mon Oct 22 18:55:16 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 17647534080 -D 120 -a MPIIO -e -g -z -k -o /esfs/jtacquaviva/file_read -r
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 18:55:16 2018
Summary:
api = MPIIO (version=3, subversion=0)
test filename = /esfs/jtacquaviva/file_read
access = single-shared-file
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 16 (8 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 16.44 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 4517768724480.
WARNING: Using actual aggregate bytes moved = 4975853568.
read 39.23 17233920 16.00 0.009074 120.96 0.000317 120.97 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 4517768724480.
WARNING: Using actual aggregate bytes moved = 5067522048.
read 39.95 17233920 16.00 0.001254 120.98 0.003315 120.98 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 4517768724480.
WARNING: Using actual aggregate bytes moved = 5162123264.
read 40.70 17233920 16.00 0.001256 120.97 0.000292 120.97 2
Max Read: 40.70 MiB/sec (42.67 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
read 40.70 39.23 39.96 0.60 120.97372 0 16 8 3 0 0 1 0 0 1 17647534080 16384 4975853568 MPIIO 0
Finished: Mon Oct 22 19:01:19 2018
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1
/esfs/jtacquaviva/ioperf/file_write
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 6
obdidx objid objid group
6 12591780 0xc022a4 0
0 12420166 0xbd8446 0
2 12566520 0xbfbff8 0
4 12508237 0xbedc4d 0

@@ -0,0 +1,132 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 8 -np 16 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 17647534080 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:8#API:POSIX#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
[cli_3]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 3
[cli_6]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 6
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
[cli_2]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 2
[cli_4]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 4
[cli_5]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 5
[cli_7]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 7
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_8]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 8
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_9]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 9
[cli_10]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 10
[cli_11]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 11
[cli_12]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 12
[cli_13]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 13
[cli_14]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 14
[cli_15]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 15
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 31097 RUNNING AT isc17-c05
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:0@isc17-c04] HYDU_sock_write (utils/sock/sock.c:286): write error (Broken pipe)
[proxy:0:0@isc17-c04] main (pm/pmiserv/pmip.c:265): unable to send EXIT_STATUS command upstream
[mpiexec@isc17-c04] HYDT_bscu_wait_for_completion (tools/bootstrap/utils/bscu_wait.c:76): one of the processes terminated badly; aborting
[mpiexec@isc17-c04] HYDT_bsci_wait_for_completion (tools/bootstrap/src/bsci_wait.c:23): launcher returned error waiting for completion
[mpiexec@isc17-c04] HYD_pmci_wait_for_completion (pm/pmiserv/pmiserv_pmci.c:218): launcher returned error waiting for completion
[mpiexec@isc17-c04] main (ui/mpich/mpiexec.c:344): process manager error waiting for completion
+ /opt/ddn/mvapich/bin/mpiexec -ppn 8 -np 16 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
+ /opt/ddn/mvapich/bin/mpiexec -ppn 8 -np 16 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 102400 -b 17647534080 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/indread2/file -r
+ tee -a ./output/COUNT:1#NN:2#PPN:8#API:POSIX#T:102400.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
ior ERROR: block size must be a multiple of transfer size, errno 2, No such file or directory (ior.c:2293)
[cli_8]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 8
ior ERROR: block size must be a multiple of transfer size, errno 0, Success (ior.c:2293)
[cli_9]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 9
[cli_10]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 10
[cli_11]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 11
[cli_12]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 12
[cli_13]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 13
[cli_14]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 14
[cli_15]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 15
[cli_1]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 1
[cli_2]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 2
[cli_3]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 3
[cli_4]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 4
[cli_5]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 5
[cli_6]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 6
[cli_7]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 7
[cli_0]: aborting job:
application called MPI_Abort(MPI_COMM_WORLD, -1) - process 0
===================================================================================
= BAD TERMINATION OF ONE OF YOUR APPLICATION PROCESSES
= PID 31211 RUNNING AT isc17-c05
= EXIT CODE: 255
= CLEANING UP REMAINING PROCESSES
= YOU CAN IGNORE THE BELOW CLEANUP MESSAGES
===================================================================================
[proxy:0:0@isc17-c04] HYDU_sock_write (utils/sock/sock.c:286): write error (Broken pipe)
[proxy:0:0@isc17-c04] main (pm/pmiserv/pmip.c:265): unable to send EXIT_STATUS command upstream
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1

@@ -0,0 +1,284 @@
+ /opt/ddn/mvapich/bin/mpiexec -ppn 8 -np 16 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 17647534080 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
+ tee -a ./output/COUNT:1#NN:2#PPN:8#API:POSIX#T:16384.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
Began: Mon Oct 22 18:36:05 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 17647534080 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/ioperf/file_write -w
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 18:36:05 2018
Summary:
api = POSIX
test filename = /esfs/jtacquaviva/ioperf/file_write
access = file-per-process
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 16 (8 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 16.44 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282360545280.
WARNING: Using actual aggregate bytes moved = 49571840000.
write 358.18 17233920 16.00 0.001712 131.99 0.000277 131.99 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282359758848.
WARNING: Using actual aggregate bytes moved = 38581731328.
write 278.06 17233920 16.00 0.002477 132.32 0.000384 132.33 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 282359234560.
WARNING: Using actual aggregate bytes moved = 37216092160.
write 264.46 17233920 16.00 0.010438 134.20 0.000421 134.21 2
Max Write: 358.18 MiB/sec (375.57 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
write 358.18 264.46 300.23 41.35 132.84150 0 16 8 3 1 0 1 0 0 1 17647534080 16384 49571840000 POSIX 0
Finished: Mon Oct 22 18:42:48 2018
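The per-iteration warnings above are expected with stonewalling (-D 120): each iteration is cut off after roughly 120 s, so fewer bytes move than the expected aggregate, and IOR computes bandwidth from the actual bytes moved. Cross-checking iteration 0 by hand (plain arithmetic, not part of the benchmark):

  # bandwidth = actual bytes moved / elapsed time
  # 49571840000 B / 131.99 s = 375.6e6 B/s = 375.6 MB/s
  awk 'BEGIN { printf "%.2f MiB/s\n", 49571840000 / 131.99 / 1048576 }'   # ~358.2, matching the write line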
+ /opt/ddn/mvapich/bin/mpiexec -ppn 8 -np 16 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/git/ime-evaluation/drop_caches.sh
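drop_caches.sh is run on both hosts between the write and read phases so the read phase is served by the file system rather than the page cache. Its contents are not part of this log; a typical implementation (an assumption, requires root) looks like:

  #!/bin/bash
  # flush dirty pages, then drop page cache, dentries and inodes
  sync
  echo 3 > /proc/sys/vm/drop_caches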
+ /opt/ddn/mvapich/bin/mpiexec -ppn 8 -np 16 -genv MV2_NUM_HCAS 1 -genv MV2_CPU_BINDING_LEVEL core -genv MV2_CPU_BINDING_POLICY scatter --hosts isc17-c04,isc17-c05 /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 17647534080 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/indread2/file -r
+ tee -a ./output/COUNT:1#NN:2#PPN:8#API:POSIX#T:16384.txt
IOR-3.0.1: MPI Coordinated Test of Parallel I/O
Began: Mon Oct 22 18:43:00 2018
Command line used: /esfs/jtacquaviva/software/install/ior/git-ddn/bin/ior -i 3 -s 1 -t 16384 -b 17647534080 -D 120 -a POSIX -F -e -g -z -k -o /esfs/jtacquaviva/indread2/file -r
Machine: Linux isc17-c04
Test 0 started: Mon Oct 22 18:43:00 2018
Summary:
api = POSIX
test filename = /esfs/jtacquaviva/indread2/file
access = file-per-process
ordering in a file = random offsets
ordering inter file= no tasks offsets
clients = 16 (8 per node)
repetitions = 3
xfersize = 16384 bytes
blocksize = 16.44 GiB
aggregate filesize = 262.97 GiB
Using stonewalling = 120 second(s)
access bw(MiB/s) block(KiB) xfer(KiB) open(s) wr/rd(s) close(s) total(s) iter
------ --------- ---------- --------- -------- -------- -------- -------- ----
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 2258884362240.
WARNING: Using actual aggregate bytes moved = 4385964032.
read 34.84 17233920 16.00 0.001482 120.07 0.000447 120.07 0
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 2258884362240.
WARNING: Using actual aggregate bytes moved = 4767203328.
read 37.86 17233920 16.00 0.001426 120.07 0.004977 120.08 1
WARNING: Expected aggregate file size = 282360545280.
WARNING: Stat() of aggregate file size = 2258884362240.
WARNING: Using actual aggregate bytes moved = 4897538048.
read 38.90 17233920 16.00 0.000494 120.07 0.005710 120.07 2
Max Read: 38.90 MiB/sec (40.79 MB/sec)
Summary of all tests:
Operation Max(MiB) Min(MiB) Mean(MiB) StdDev Mean(s) Test# #Tasks tPN reps fPP reord reordoff reordrand seed segcnt blksiz xsize aggsize API RefNum
read 38.90 34.84 37.20 1.72 120.07329 0 16 8 3 1 0 1 0 0 1 17647534080 16384 4385964032 POSIX 0
Finished: Mon Oct 22 18:49:01 2018
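Note that the read phase targets /esfs/jtacquaviva/indread2/file rather than the files just written; the stat() of 2258884362240 bytes (about 2.26 TB) points to a pre-existing read dataset, which -k keeps in place. IOR reports bandwidth in both binary and decimal units; the conversion factor is 2^20/10^6 = 1.048576:

  # MiB/s -> MB/s
  awk 'BEGIN { printf "%.2f MB/s\n", 38.90 * 1.048576 }'   # 40.79, as in "Max Read" above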
+ set +x
/esfs/jtacquaviva/ioperf
stripe_count: 4 stripe_size: 1048576 stripe_offset: -1
/esfs/jtacquaviva/ioperf/file_write.00000006
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 2
obdidx objid objid group
2 12566515 0xbfbff3 0
4 12508233 0xbedc49 0
7 12456874 0xbe13aa 0
1 12638147 0xc0d7c3 0
/esfs/jtacquaviva/ioperf/file_write.00000002
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 0
obdidx objid objid group
0 12420160 0xbd8440 0
2 12566514 0xbfbff2 0
4 12508232 0xbedc48 0
7 12456873 0xbe13a9 0
/esfs/jtacquaviva/ioperf/file_write.00000000
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 6
obdidx objid objid group
6 12591773 0xc0229d 0
0 12420159 0xbd843f 0
2 12566513 0xbfbff1 0
4 12508231 0xbedc47 0
/esfs/jtacquaviva/ioperf/file_write.00000004
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 2
obdidx objid objid group
2 12566511 0xbfbfef 0
4 12508229 0xbedc45 0
7 12456870 0xbe13a6 0
1 12638143 0xc0d7bf 0
/esfs/jtacquaviva/ioperf/file_write.00000007
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 7
obdidx objid objid group
7 12456876 0xbe13ac 0
1 12638149 0xc0d7c5 0
3 12486461 0xbe873d 0
5 12378515 0xbce193 0
/esfs/jtacquaviva/ioperf/file_write.00000012
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 7
obdidx objid objid group
7 12456872 0xbe13a8 0
1 12638145 0xc0d7c1 0
3 12486457 0xbe8739 0
5 12378511 0xbce18f 0
/esfs/jtacquaviva/ioperf/file_write.00000013
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 5
obdidx objid objid group
5 12378510 0xbce18e 0
6 12591772 0xc0229c 0
0 12420158 0xbd843e 0
2 12566512 0xbfbff0 0
/esfs/jtacquaviva/ioperf/file_write.00000003
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 1
obdidx objid objid group
1 12638150 0xc0d7c6 0
3 12486462 0xbe873e 0
5 12378516 0xbce194 0
6 12591778 0xc022a2 0
/esfs/jtacquaviva/ioperf/file_write.00000008
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 5
obdidx objid objid group
5 12378514 0xbce192 0
6 12591776 0xc022a0 0
0 12420162 0xbd8442 0
2 12566516 0xbfbff4 0
/esfs/jtacquaviva/ioperf/file_write.00000010
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 6
obdidx objid objid group
6 12591777 0xc022a1 0
0 12420163 0xbd8443 0
2 12566517 0xbfbff5 0
4 12508235 0xbedc4b 0
/esfs/jtacquaviva/ioperf/file_write.00000005
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 4
obdidx objid objid group
4 12508234 0xbedc4a 0
7 12456875 0xbe13ab 0
1 12638148 0xc0d7c4 0
3 12486460 0xbe873c 0
/esfs/jtacquaviva/ioperf/file_write.00000014
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 3
obdidx objid objid group
3 12486459 0xbe873b 0
5 12378513 0xbce191 0
6 12591775 0xc0229f 0
0 12420161 0xbd8441 0
/esfs/jtacquaviva/ioperf/file_write.00000009
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 4
obdidx objid objid group
4 12508230 0xbedc46 0
7 12456871 0xbe13a7 0
1 12638144 0xc0d7c0 0
3 12486456 0xbe8738 0
/esfs/jtacquaviva/ioperf/file_write.00000015
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 3
obdidx objid objid group
3 12486455 0xbe8737 0
5 12378509 0xbce18d 0
6 12591771 0xc0229b 0
0 12420157 0xbd843d 0
/esfs/jtacquaviva/ioperf/file_write.00000001
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 0
obdidx objid objid group
0 12420164 0xbd8444 0
2 12566518 0xbfbff6 0
4 12508236 0xbedc4c 0
7 12456877 0xbe13ad 0
/esfs/jtacquaviva/ioperf/file_write.00000011
lmm_stripe_count: 4
lmm_stripe_size: 1048576
lmm_pattern: 1
lmm_layout_gen: 0
lmm_stripe_offset: 1
obdidx objid objid group
1 12638146 0xc0d7c2 0
3 12486458 0xbe873a 0
5 12378512 0xbce190 0
6 12591774 0xc0229e 0
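The listing above is lfs getstripe output for the sixteen file-per-process files: each is striped across 4 OSTs (lmm_stripe_count) in 1 MiB chunks (lmm_stripe_size), and the starting OST (lmm_stripe_offset) varies per file, spreading load over the eight OSTs shown in the obdidx column. For reference, the same layout can be inspected and reproduced with standard Lustre tooling (recent Lustre uses -S for the stripe size; older releases use -s):

  lfs getstripe /esfs/jtacquaviva/ioperf              # show layout of existing files
  lfs setstripe -c 4 -S 1m /esfs/jtacquaviva/ioperf   # new files: 4 OSTs, 1 MiB stripes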