master
Eugen Betke 2018-10-24 17:13:05 +02:00
parent db075f246b
commit 21fdfa9ebb
30 changed files with 12398 additions and 0 deletions

File diff suppressed because it is too large.

Binary file not shown.


Binary file not shown.


Binary file not shown.

Binary file not shown.

Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.


Binary file not shown.



@@ -0,0 +1,88 @@
%%
%% This is file `aliascnt.sty',
%% generated with the docstrip utility.
%%
%% The original source files were:
%%
%% aliascnt.dtx (with options: `package')
%%
%% This is a generated file.
%%
%% Project: aliascnt
%% Version: 2009/09/08 v1.3
%%
%% Copyright (C) 2006, 2009 by
%% Heiko Oberdiek <heiko.oberdiek at googlemail.com>
%%
%% This work may be distributed and/or modified under the
%% conditions of the LaTeX Project Public License, either
%% version 1.3c of this license or (at your option) any later
%% version. This version of this license is in
%% http://www.latex-project.org/lppl/lppl-1-3c.txt
%% and the latest version of this license is in
%% http://www.latex-project.org/lppl.txt
%% and version 1.3 or later is part of all distributions of
%% LaTeX version 2005/12/01 or later.
%%
%% This work has the LPPL maintenance status "maintained".
%%
%% The Current Maintainer of this work is Heiko Oberdiek.
%%
%% This work consists of the main source file aliascnt.dtx
%% and the derived files
%% aliascnt.sty, aliascnt.pdf, aliascnt.ins, aliascnt.drv.
%%
\NeedsTeXFormat{LaTeX2e}
\ProvidesPackage{aliascnt}%
[2009/09/08 v1.3 Alias counter (HO)]%
\newcommand*{\newaliascnt}[2]{%
\begingroup
\def\AC@glet##1{%
\global\expandafter\let\csname##1#1\expandafter\endcsname
\csname##1#2\endcsname
}%
\@ifundefined{c@#2}{%
\@nocounterr{#2}%
}{%
\expandafter\@ifdefinable\csname c@#1\endcsname{%
\AC@glet{c@}%
\AC@glet{the}%
\AC@glet{theH}%
\AC@glet{p@}%
\expandafter\gdef\csname AC@cnt@#1\endcsname{#2}%
\expandafter\gdef\csname cl@#1\expandafter\endcsname
\expandafter{\csname cl@#2\endcsname}%
}%
}%
\endgroup
}
\newcommand*{\aliascntresetthe}[1]{%
\@ifundefined{AC@cnt@#1}{%
\PackageError{aliascnt}{%
`#1' is not an alias counter%
}\@ehc
}{%
\expandafter\let\csname the#1\expandafter\endcsname
\csname the\csname AC@cnt@#1\endcsname\endcsname
}%
}
\newcommand*{\AC@findrootcnt}[1]{%
\@ifundefined{AC@cnt@#1}{%
#1%
}{%
\expandafter\AC@findrootcnt\csname AC@cnt@#1\endcsname
}%
}
\def\AC@patch#1{%
\expandafter\let\csname AC@org@#1reset\expandafter\endcsname
\csname @#1reset\endcsname
\expandafter\def\csname @#1reset\endcsname##1##2{%
\csname AC@org@#1reset\endcsname{##1}{\AC@findrootcnt{##2}}%
}%
}
\RequirePackage{remreset}
\AC@patch{addto}
\AC@patch{removefrom}
\endinput
%%
%% End of file `aliascnt.sty'.
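
The package above lets two theorem-like environments share one counter. A minimal usage sketch (the environment names here are illustrative, not part of the package):

```latex
% Sketch: let lemmas share the theorem counter via aliascnt.
\documentclass{article}
\usepackage{aliascnt}
\newtheorem{theorem}{Theorem}
\newaliascnt{lemma}{theorem}       % `lemma' counter becomes an alias of `theorem'
\newtheorem{lemma}[lemma]{Lemma}   % typeset Lemma with the alias counter
\aliascntresetthe{lemma}           % make \thelemma expand via \thetheorem again
\begin{document}
\begin{theorem} First result. \end{theorem}
\begin{lemma} Shares the same numbering sequence. \end{lemma}
\end{document}
```

The alias keeps \thelemma distinguishable from \thetheorem for packages such as hyperref and cleveref, which is the main reason to prefer it over simply reusing the counter.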


@@ -0,0 +1,129 @@
Version history for the LLNCS LaTeX2e class
date filename version action/reason/acknowledgements
----------------------------------------------------------------------------
29.5.96 letter.txt beta naming problems (subject index file)
thanks to Dr. Martin Held, Salzburg, AT
subjindx.ind renamed to subjidx.ind as required
by llncs.dem
history.txt introducing this file
30.5.96 llncs.cls incompatibility with new article.cls of
1995/12/20 v1.3q Standard LaTeX document class,
\if@openbib is no longer defined,
reported by Ralf Heckmann and Graham Gough
solution by David Carlisle
10.6.96 llncs.cls problems with fragile commands in \author field
reported by Michael Gschwind, TU Wien
25.7.96 llncs.cls revision a corrects:
wrong size of text area, floats not \small,
some LaTeX generated texts
reported by Michael Sperber, Uni Tuebingen
16.4.97 all files 2.1 leaving beta state,
raising version counter to 2.1
8.6.97 llncs.cls 2.1a revision a corrects:
unbreakable citation lists, reported by
Sergio Antoy of Portland State University
11.12.97 llncs.cls 2.2 "general" headings centered; two new elements
for the article header: \email and \homedir;
complete revision of special environments:
\newtheorem replaced with \spnewtheorem,
introduced the theopargself environment;
two column parts made with multicol package;
add ons to work with the hyperref package
07.01.98 llncs.cls 2.2 changed \email to simply switch to \tt
25.03.98 llncs.cls 2.3 new class option "oribibl" to suppress
changes to the thebibliography environment
and retain pure LaTeX codes - useful
for most BibTeX applications
16.04.98 llncs.cls 2.3 if option "oribibl" is given, extend the
thebibliography hook with "\small", suggested
by Clemens Ballarin, University of Cambridge
20.11.98 llncs.cls 2.4 pagestyle "titlepage" - useful for
compilation of whole LNCS volumes
12.01.99 llncs.cls 2.5 counters of orthogonal numbered special
environments are reset each new contribution
27.04.99 llncs.cls 2.6 new command \thisbottomragged for the
actual page; indention of the footnote
made variable with \fnindent (default 1em);
new command \url that copies its argument
2.03.00 llncs.cls 2.7 \figurename and \tablename made compatible
to babel, suggested by Jo Hereth, TU Darmstadt;
definition of \url moved \AtBeginDocument
(allows for url package of Donald Arseneau),
suggested by Manfred Hauswirth, TU of Vienna;
\large for part entries in the TOC
16.04.00 llncs.cls 2.8 new option "orivec" to preserve the original
vector definition, read "arrow" accent
17.01.01 llncs.cls 2.9 hardwired texts made polyglot,
available languages: english (default),
french, german - all are "babel-proof"
20.06.01 splncs.bst public release of a BibTeX style for LNCS,
nobly provided by Jason Noble
14.08.01 llncs.cls 2.10 TOC: authors flushleft,
entries without hyphenation; suggested
by Wiro Niessen, Imaging Center - Utrecht
23.01.02 llncs.cls 2.11 fixed footnote number confusion with
\thanks, numbered institutes, and normal
footnote entries; error reported by
Saverio Cittadini, Istituto Tecnico
Industriale "Tito Sarrocchi" - Siena
28.01.02 llncs.cls 2.12 fixed footnote fix; error reported by
Chris Mesterharm, CS Dept. Rutgers - NJ
28.01.02 llncs.cls 2.13 fixed the fix (programmer needs vacation)
17.08.04 llncs.cls 2.14 TOC: authors indented, smart \and handling
for the TOC suggested by Thomas Gabel
University of Osnabrueck
07.03.06 splncs.bst fix for BibTeX entries without year; patch
provided by Jerry James, Utah State University
14.06.06 splncs_srt.bst a sorting BibTeX style for LNCS, feature
provided by Tobias Heindel, FMI Uni-Stuttgart
16.10.06 llncs.dem 2.3 removed affiliations from \tocauthor demo
11.12.07 llncs.doc note on online visibility of given e-mail address
15.06.09 splncs03.bst new BibTeX style compliant with the current
requirements, provided by Maurizio "Titto"
Patrignani of Universita' Roma Tre
30.03.10 llncs.cls 2.15 fixed broken hyperref interoperability;
patch provided by Sven Koehler,
Hamburg University of Technology
15.04.10 llncs.cls 2.16 fixed hyperref warning for informatory TOC entries;
introduced \keywords command - finally;
blank removed from \keywordname, flaw reported
by Armin B. Wagner, IGW TU Vienna
15.04.10 llncs.cls 2.17 fixed missing switch "openright" used by \backmatter;
flaw reported by Tobias Pape, University of Potsdam
27.09.13 llncs.cls 2.18 fixed "ngerman" incompatibility; solution provided
by Bastian Pfleging, University of Stuttgart
04.09.17 llncs.cls 2.19 introduced \orcidID command

File diff suppressed because it is too large.

File diff suppressed because it is too large.

File diff suppressed because it is too large.

Binary file not shown.


@@ -0,0 +1,349 @@
% This is LLNCS.IND the handmade demonstration
% file for an author index from Springer-Verlag
% for Lecture Notes in Computer Science,
% version 2.2 for LaTeX2e
%
\begin{theindex}
\item Abt~I. \idxquad{7}
\item Ahmed~T. \idxquad{3}
\item Andreev~V. \idxquad{24}
\item Andrieu~B. \idxquad{27}
\item Arpagaus~M. \idxquad{34}
\indexspace
\item Babaev~A. \idxquad{25}
\item B\"arwolff~A. \idxquad{33}
\item B\'an~J. \idxquad{17}
\item Baranov~P. \idxquad{24}
\item Barrelet~E. \idxquad{28}
\item Bartel~W. \idxquad{11}
\item Bassler~U. \idxquad{28}
\item Beck~H.P. \idxquad{35}
\item Behrend~H.-J. \idxquad{11}
\item Berger~Ch. \idxquad{1}
\item Bergstein~H. \idxquad{1}
\item Bernardi~G. \idxquad{28}
\item Bernet~R. \idxquad{34}
\item Besan\c con~M. \idxquad{9}
\item Biddulph~P. \idxquad{22}
\item Binder~E. \idxquad{11}
\item Bischoff~A. \idxquad{33}
\item Blobel~V. \idxquad{13}
\item Borras~K. \idxquad{8}
\item Bosetti~P.C. \idxquad{2}
\item Boudry~V. \idxquad{27}
\item Brasse~F. \idxquad{11}
\item Braun~U. \idxquad{2}
\item Braunschweig~A. \idxquad{1}
\item Brisson~V. \idxquad{26}
\item B\"ungener~L. \idxquad{13}
\item B\"urger~J. \idxquad{11}
\item B\"usser~F.W. \idxquad{13}
\item Buniatian~A. \idxquad{11,37}
\item Buschhorn~G. \idxquad{25}
\indexspace
\item Campbell~A.J. \idxquad{1}
\item Carli~T. \idxquad{25}
\item Charles~F. \idxquad{28}
\item Clarke~D. \idxquad{5}
\item Clegg~A.B. \idxquad{18}
\item Colombo~M. \idxquad{8}
\item Courau~A. \idxquad{26}
\item Coutures~Ch. \idxquad{9}
\item Cozzika~G. \idxquad{9}
\item Criegee~L. \idxquad{11}
\item Cvach~J. \idxquad{27}
\indexspace
\item Dagoret~S. \idxquad{28}
\item Dainton~J.B. \idxquad{19}
\item Dann~A.W.E. \idxquad{22}
\item Dau~W.D. \idxquad{16}
\item Deffur~E. \idxquad{11}
\item Delcourt~B. \idxquad{26}
\item Buono~Del~A. \idxquad{28}
\item Devel~M. \idxquad{26}
\item De Roeck~A. \idxquad{11}
\item Dingus~P. \idxquad{27}
\item Dollfus~C. \idxquad{35}
\item Dreis~H.B. \idxquad{2}
\item Drescher~A. \idxquad{8}
\item D\"ullmann~D. \idxquad{13}
\item D\"unger~O. \idxquad{13}
\item Duhm~H. \idxquad{12}
\indexspace
\item Ebbinghaus~R. \idxquad{8}
\item Eberle~M. \idxquad{12}
\item Ebert~J. \idxquad{32}
\item Ebert~T.R. \idxquad{19}
\item Efremenko~V. \idxquad{23}
\item Egli~S. \idxquad{35}
\item Eichenberger~S. \idxquad{35}
\item Eichler~R. \idxquad{34}
\item Eisenhandler~E. \idxquad{20}
\item Ellis~N.N. \idxquad{3}
\item Ellison~R.J. \idxquad{22}
\item Elsen~E. \idxquad{11}
\item Evrard~E. \idxquad{4}
\indexspace
\item Favart~L. \idxquad{4}
\item Feeken~D. \idxquad{13}
\item Felst~R. \idxquad{11}
\item Feltesse~A. \idxquad{9}
\item Fensome~I.F. \idxquad{3}
\item Ferrarotto~F. \idxquad{31}
\item Flamm~K. \idxquad{11}
\item Flauger~W. \idxquad{11}
\item Flieser~M. \idxquad{25}
\item Fl\"ugge~G. \idxquad{2}
\item Fomenko~A. \idxquad{24}
\item Fominykh~B. \idxquad{23}
\item Form\'anek~J. \idxquad{30}
\item Foster~J.M. \idxquad{22}
\item Franke~G. \idxquad{11}
\item Fretwurst~E. \idxquad{12}
\indexspace
\item Gabathuler~E. \idxquad{19}
\item Gamerdinger~K. \idxquad{25}
\item Garvey~J. \idxquad{3}
\item Gayler~J. \idxquad{11}
\item Gellrich~A. \idxquad{13}
\item Gennis~M. \idxquad{11}
\item Genzel~H. \idxquad{1}
\item Godfrey~L. \idxquad{7}
\item Goerlach~U. \idxquad{11}
\item Goerlich~L. \idxquad{6}
\item Gogitidze~N. \idxquad{24}
\item Goodall~A.M. \idxquad{19}
\item Gorelov~I. \idxquad{23}
\item Goritchev~P. \idxquad{23}
\item Grab~C. \idxquad{34}
\item Gr\"assler~R. \idxquad{2}
\item Greenshaw~T. \idxquad{19}
\item Greif~H. \idxquad{25}
\item Grindhammer~G. \idxquad{25}
\indexspace
\item Haack~J. \idxquad{33}
\item Haidt~D. \idxquad{11}
\item Hamon~O. \idxquad{28}
\item Handschuh~D. \idxquad{11}
\item Hanlon~E.M. \idxquad{18}
\item Hapke~M. \idxquad{11}
\item Harjes~J. \idxquad{11}
\item Haydar~R. \idxquad{26}
\item Haynes~W.J. \idxquad{5}
\item Hedberg~V. \idxquad{21}
\item Heinzelmann~G. \idxquad{13}
\item Henderson~R.C.W. \idxquad{18}
\item Henschel~H. \idxquad{33}
\item Herynek~I. \idxquad{29}
\item Hildesheim~W. \idxquad{11}
\item Hill~P. \idxquad{11}
\item Hilton~C.D. \idxquad{22}
\item Hoeger~K.C. \idxquad{22}
\item Huet~Ph. \idxquad{4}
\item Hufnagel~H. \idxquad{14}
\item Huot~N. \idxquad{28}
\indexspace
\item Itterbeck~H. \idxquad{1}
\indexspace
\item Jabiol~M.-A. \idxquad{9}
\item Jacholkowska~A. \idxquad{26}
\item Jacobsson~C. \idxquad{21}
\item Jansen~T. \idxquad{11}
\item J\"onsson~L. \idxquad{21}
\item Johannsen~A. \idxquad{13}
\item Johnson~D.P. \idxquad{4}
\item Jung~H. \idxquad{2}
\indexspace
\item Kalmus~P.I.P. \idxquad{20}
\item Kasarian~S. \idxquad{11}
\item Kaschowitz~R. \idxquad{2}
\item Kathage~U. \idxquad{16}
\item Kaufmann~H. \idxquad{33}
\item Kenyon~I.R. \idxquad{3}
\item Kermiche~S. \idxquad{26}
\item Kiesling~C. \idxquad{25}
\item Klein~M. \idxquad{33}
\item Kleinwort~C. \idxquad{13}
\item Knies~G. \idxquad{11}
\item Ko~W. \idxquad{7}
\item K\"ohler~T. \idxquad{1}
\item Kolanoski~H. \idxquad{8}
\item Kole~F. \idxquad{7}
\item Kolya~S.D. \idxquad{22}
\item Korbel~V. \idxquad{11}
\item Korn~M. \idxquad{8}
\item Kostka~P. \idxquad{33}
\item Kotelnikov~S.K. \idxquad{24}
\item Krehbiel~H. \idxquad{11}
\item Kr\"ucker~D. \idxquad{2}
\item Kr\"uger~U. \idxquad{11}
\item Kubenka~J.P. \idxquad{25}
\item Kuhlen~M. \idxquad{25}
\item Kur\v{c}a~T. \idxquad{17}
\item Kurzh\"ofer~J. \idxquad{8}
\item Kuznik~B. \idxquad{32}
\indexspace
\item Lamarche~F. \idxquad{27}
\item Lander~R. \idxquad{7}
\item Landon~M.P.J. \idxquad{20}
\item Lange~W. \idxquad{33}
\item Lanius~P. \idxquad{25}
\item Laporte~J.F. \idxquad{9}
\item Lebedev~A. \idxquad{24}
\item Leuschner~A. \idxquad{11}
\item Levonian~S. \idxquad{11,24}
\item Lewin~D. \idxquad{11}
\item Ley~Ch. \idxquad{2}
\item Lindner~A. \idxquad{8}
\item Lindstr\"om~G. \idxquad{12}
\item Linsel~F. \idxquad{11}
\item Lipinski~J. \idxquad{13}
\item Loch~P. \idxquad{11}
\item Lohmander~H. \idxquad{21}
\item Lopez~G.C. \idxquad{20}
\indexspace
\item Magnussen~N. \idxquad{32}
\item Mani~S. \idxquad{7}
\item Marage~P. \idxquad{4}
\item Marshall~R. \idxquad{22}
\item Martens~J. \idxquad{32}
\item Martin~A. \idxquad{19}
\item Martyn~H.-U. \idxquad{1}
\item Martyniak~J. \idxquad{6}
\item Masson~S. \idxquad{2}
\item Mavroidis~A. \idxquad{20}
\item McMahon~S.J. \idxquad{19}
\item Mehta~A. \idxquad{22}
\item Meier~K. \idxquad{15}
\item Mercer~D. \idxquad{22}
\item Merz~T. \idxquad{11}
\item Meyer~C.A. \idxquad{35}
\item Meyer~H. \idxquad{32}
\item Meyer~J. \idxquad{11}
\item Mikocki~S. \idxquad{6,26}
\item Milone~V. \idxquad{31}
\item Moreau~F. \idxquad{27}
\item Moreels~J. \idxquad{4}
\item Morris~J.V. \idxquad{5}
\item M\"uller~K. \idxquad{35}
\item Murray~S.A. \idxquad{22}
\indexspace
\item Nagovizin~V. \idxquad{23}
\item Naroska~B. \idxquad{13}
\item Naumann~Th. \idxquad{33}
\item Newton~D. \idxquad{18}
\item Neyret~D. \idxquad{28}
\item Nguyen~A. \idxquad{28}
\item Niebergall~F. \idxquad{13}
\item Nisius~R. \idxquad{1}
\item Nowak~G. \idxquad{6}
\item Nyberg~M. \idxquad{21}
\indexspace
\item Oberlack~H. \idxquad{25}
\item Obrock~U. \idxquad{8}
\item Olsson~J.E. \idxquad{11}
\item Ould-Saada~F. \idxquad{13}
\indexspace
\item Pascaud~C. \idxquad{26}
\item Patel~G.D. \idxquad{19}
\item Peppel~E. \idxquad{11}
\item Phillips~H.T. \idxquad{3}
\item Phillips~J.P. \idxquad{22}
\item Pichler~Ch. \idxquad{12}
\item Pilgram~W. \idxquad{2}
\item Pitzl~D. \idxquad{34}
\item Prell~S. \idxquad{11}
\item Prosi~R. \idxquad{11}
\indexspace
\item R\"adel~G. \idxquad{11}
\item Raupach~F. \idxquad{1}
\item Rauschnabel~K. \idxquad{8}
\item Reinshagen~S. \idxquad{11}
\item Ribarics~P. \idxquad{25}
\item Riech~V. \idxquad{12}
\item Riedlberger~J. \idxquad{34}
\item Rietz~M. \idxquad{2}
\item Robertson~S.M. \idxquad{3}
\item Robmann~P. \idxquad{35}
\item Roosen~R. \idxquad{4}
\item Royon~C. \idxquad{9}
\item Rudowicz~M. \idxquad{25}
\item Rusakov~S. \idxquad{24}
\item Rybicki~K. \idxquad{6}
\indexspace
\item Sahlmann~N. \idxquad{2}
\item Sanchez~E. \idxquad{25}
\item Savitsky~M. \idxquad{11}
\item Schacht~P. \idxquad{25}
\item Schleper~P. \idxquad{14}
\item von Schlippe~W. \idxquad{20}
\item Schmidt~D. \idxquad{32}
\item Schmitz~W. \idxquad{2}
\item Sch\"oning~A. \idxquad{11}
\item Schr\"oder~V. \idxquad{11}
\item Schulz~M. \idxquad{11}
\item Schwab~B. \idxquad{14}
\item Schwind~A. \idxquad{33}
\item Seehausen~U. \idxquad{13}
\item Sell~R. \idxquad{11}
\item Semenov~A. \idxquad{23}
\item Shekelyan~V. \idxquad{23}
\item Shooshtari~H. \idxquad{25}
\item Shtarkov~L.N. \idxquad{24}
\item Siegmon~G. \idxquad{16}
\item Siewert~U. \idxquad{16}
\item Skillicorn~I.O. \idxquad{10}
\item Smirnov~P. \idxquad{24}
\item Smith~J.R. \idxquad{7}
\item Smolik~L. \idxquad{11}
\item Spitzer~H. \idxquad{13}
\item Staroba~P. \idxquad{29}
\item Steenbock~M. \idxquad{13}
\item Steffen~P. \idxquad{11}
\item Stella~B. \idxquad{31}
\item Stephens~K. \idxquad{22}
\item St\"osslein~U. \idxquad{33}
\item Strachota~J. \idxquad{11}
\item Straumann~U. \idxquad{35}
\item Struczinski~W. \idxquad{2}
\indexspace
\item Taylor~R.E. \idxquad{36,26}
\item Tchernyshov~V. \idxquad{23}
\item Thiebaux~C. \idxquad{27}
\item Thompson~G. \idxquad{20}
\item Tru\"ol~P. \idxquad{35}
\item Turnau~J. \idxquad{6}
\indexspace
\item Urban~L. \idxquad{25}
\item Usik~A. \idxquad{24}
\indexspace
\item Valkarova~A. \idxquad{30}
\item Vall\'ee~C. \idxquad{28}
\item Van Esch~P. \idxquad{4}
\item Vartapetian~A. \idxquad{11}
\item Vazdik~Y. \idxquad{24}
\item Verrecchia~P. \idxquad{9}
\item Vick~R. \idxquad{13}
\item Vogel~E. \idxquad{1}
\indexspace
\item Wacker~K. \idxquad{8}
\item Walther~A. \idxquad{8}
\item Weber~G. \idxquad{13}
\item Wegner~A. \idxquad{11}
\item Wellisch~H.P. \idxquad{25}
\item West~L.R. \idxquad{3}
\item Willard~S. \idxquad{7}
\item Winde~M. \idxquad{33}
\item Winter~G.-G. \idxquad{11}
\item Wolff~Th. \idxquad{34}
\item Wright~A.E. \idxquad{22}
\item Wulff~N. \idxquad{11}
\indexspace
\item Yiou~T.P. \idxquad{28}
\indexspace
\item \v{Z}\'a\v{c}ek~J. \idxquad{30}
\item Zeitnitz~C. \idxquad{12}
\item Ziaeepour~H. \idxquad{26}
\item Zimmer~M. \idxquad{11}
\item Zimmermann~W. \idxquad{11}
\end{theindex}

Binary file not shown.


@@ -0,0 +1,42 @@
% This is LLNCSDOC.STY the modification of the
% LLNCS class file for the documentation of
% the class itself.
%
\def\AmS{{\protect\usefont{OMS}{cmsy}{m}{n}%
A\kern-.1667em\lower.5ex\hbox{M}\kern-.125emS}}
\def\AmSTeX{{\protect\AmS-\protect\TeX}}
%
\def\ps@myheadings{\let\@mkboth\@gobbletwo
\def\@oddhead{\hbox{}\hfil\small\rm\rightmark
\qquad\thepage}%
\def\@oddfoot{}\def\@evenhead{\small\rm\thepage\qquad
\leftmark\hfil}%
\def\@evenfoot{}\def\sectionmark##1{}\def\subsectionmark##1{}}
\ps@myheadings
%
\setcounter{tocdepth}{2}
%
\renewcommand{\labelitemi}{--}
\newenvironment{alpherate}%
{\renewcommand{\labelenumi}{\alph{enumi})}\begin{enumerate}}%
{\end{enumerate}\renewcommand{\labelenumi}{enumi}}
%
\def\bibauthoryear{\begingroup
\def\thebibliography##1{\section*{References}%
\small\list{}{\settowidth\labelwidth{}\leftmargin\parindent
\itemindent=-\parindent
\labelsep=\z@
\usecounter{enumi}}%
\def\newblock{\hskip .11em plus .33em minus -.07em}%
\sloppy
\sfcode`\.=1000\relax}%
\def\@cite##1{##1}%
\def\@lbibitem[##1]##2{\item[]\if@filesw
{\def\protect####1{\string ####1\space}\immediate
\write\@auxout{\string\bibcite{##2}{##1}}}\fi\ignorespaces}%
\begin{thebibliography}{}
\bibitem[1982]{clar:eke3} Clarke, F., Ekeland, I.: Nonlinear
oscillations and boundary-value problems for Hamiltonian systems.
Arch. Rat. Mech. Anal. 78, 315--333 (1982)
\end{thebibliography}
\endgroup}


@@ -0,0 +1,34 @@
Dear LLNCS user,
The files in this directory belong to the LaTeX2e package for
Lecture Notes in Computer Science (LNCS) of Springer-Verlag.
It consists of the following files:
readme.txt this file
history.txt the version history of the package
llncs.cls the LaTeX2e document class
llncs.dem the sample input file
llncs.doc the documentation of the class (LaTeX source)
llncsdoc.pdf the documentation of the class (PDF version)
llncsdoc.sty the modification of the class for the documentation
llncs.ind an external (faked) author index file
subjidx.ind subject index demo from the Springer book package
llncs.dvi the resulting DVI file (remember to use binary transfer!)
sprmindx.sty supplementary style file for MakeIndex
(usage: makeindex -s sprmindx.sty <yourfile.idx>)
splncs.bst old BibTeX style for use with llncs.cls
splncs_srt.bst ditto with alphabetic sorting
splncs03.bst current LNCS BibTeX style with alphabetic sorting
aliascnt.sty part of the Oberdiek bundle; allows more control over
the counters associated to any numbered item
remreset.sty by David Carlisle
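
For reference, the BibTeX styles listed above are selected in a document roughly as follows (the database name `myrefs' is a placeholder, not part of the package):

```latex
% Sketch: selecting the current LNCS BibTeX style in an llncs.cls document.
\bibliographystyle{splncs03}  % alphabetically sorted LNCS reference style
\bibliography{myrefs}         % placeholder for the user's .bib database
```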


@@ -0,0 +1,39 @@
% remreset package
%%%%%%%%%%%%%%%%%%
% Copyright 1997 David Carlisle
% This file may be distributed under the terms of the LPPL.
% See 00readme.txt for details.
% 1997/09/28 David Carlisle
% LaTeX includes a command \@addtoreset that is used to declare that
% a counter should be reset every time a second counter is incremented.
% For example the book class has a line
% \@addtoreset{footnote}{chapter}
% So that the footnote counter is reset each chapter.
% If you wish to base a new class on book, but without this counter
% being reset, then standard LaTeX gives no simple mechanism to do
% this.
% This package defines |\@removefromreset| which just undoes the effect
% of \@addtoreset. So for example a class file may be defined by
% \LoadClass{book}
% \@removefromreset{footnote}{chapter}
\def\@removefromreset#1#2{{%
\expandafter\let\csname c@#1\endcsname\@removefromreset
\def\@elt##1{%
\expandafter\ifx\csname c@##1\endcsname\@removefromreset
\else
\noexpand\@elt{##1}%
\fi}%
\expandafter\xdef\csname cl@#2\endcsname{%
\csname cl@#2\endcsname}}}
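
As a usage sketch matching the comments above (the class name `mybook' is hypothetical; inside a .cls file `@' is already a letter, so no \makeatletter is needed):

```latex
% mybook.cls -- hypothetical class based on book in which footnotes
% are numbered continuously instead of restarting at each chapter.
\NeedsTeXFormat{LaTeX2e}
\ProvidesClass{mybook}
\LoadClass{book}
\RequirePackage{remreset}
\@removefromreset{footnote}{chapter}
```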

File diff suppressed because it is too large.

File diff suppressed because it is too large.

File diff suppressed because it is too large.


@@ -0,0 +1,4 @@
delim_0 "\\idxquad "
delim_1 "\\idxquad "
delim_2 "\\idxquad "
delim_n ",\\,"
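
These keywords tell MakeIndex what to insert between an index entry and its page list (delim_0 through delim_2 for the three nesting depths) and between consecutive page numbers (delim_n). With this style, an entry is emitted in the form used by llncs.ind, e.g.:

```latex
\item Absorption\idxquad 327,\,331   % pages joined by ",\," per delim_n
```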


@@ -0,0 +1,70 @@
% clmomu01.ind
%-----------------------------------------------------------------------
% CLMoMu01 1.0: LaTeX style files for books
% Sample index file for User's guide
% (c) Springer-Verlag HD
%-----------------------------------------------------------------------
\begin{theindex}
\item Absorption\idxquad 327
\item Absorption of radiation \idxquad 289--292,\, 299,\,300
\item Actinides \idxquad 244
\item Aharonov-Bohm effect\idxquad 142--146
\item Angular momentum\idxquad 101--112
\subitem algebraic treatment\idxquad 391--396
\item Angular momentum addition\idxquad 185--193
\item Angular momentum commutation relations\idxquad 101
\item Angular momentum quantization\idxquad 9--10,\,104--106
\item Angular momentum states\idxquad 107,\,321,\,391--396
\item Antiquark\idxquad 83
\item $\alpha$-rays\idxquad 101--103
\item Atomic theory\idxquad 8--10,\,219--249,\,327
\item Average value\newline ({\it see also\/} Expectation value)
15--16,\,25,\,34,\,37,\,357
\indexspace
\item Baker-Hausdorff formula\idxquad 23
\item Balmer formula\idxquad 8
\item Balmer series\idxquad 125
\item Baryon\idxquad 220,\,224
\item Basis\idxquad 98
\item Basis system\idxquad 164,\,376
\item Bell inequality\idxquad 379--381,\,382
\item Bessel functions\idxquad 201,\,313,\,337
\subitem spherical\idxquad 304--306,\, 309,\, 313--314,\,322
\item Bound state\idxquad 73--74,\,78--79,\,116--118,\,202,\, 267,\,
273,\,306,\,348,\,351
\item Boundary conditions\idxquad 59,\, 70
\item Bra\idxquad 159
\item Breit-Wigner formula\idxquad 80,\,84,\,332
\item Brillouin-Wigner perturbation theory\idxquad 203
\indexspace
\item Cathode rays\idxquad 8
\item Causality\idxquad 357--359
\item Center-of-mass frame\idxquad 232,\,274,\,338
\item Central potential\idxquad 113--135,\,303--314
\item Centrifugal potential\idxquad 115--116,\,323
\item Characteristic function\idxquad 33
\item Clebsch-Gordan coefficients\idxquad 191--193
\item Cold emission\idxquad 88
\item Combination principle, Ritz's\idxquad 124
\item Commutation relations\idxquad 27,\,44,\,353,\,391
\item Commutator\idxquad 21--22,\,27,\,44,\,344
\item Compatibility of measurements\idxquad 99
\item Complete orthonormal set\idxquad 31,\,40,\,160,\,360
\item Complete orthonormal system, {\it see}\newline
Complete orthonormal set
\item Complete set of observables, {\it see\/} Complete
set of operators
\indexspace
\item Eigenfunction\idxquad 34,\,46,\,344--346
\subitem radial\idxquad 321
\subsubitem calculation\idxquad 322--324
\item EPR argument\idxquad 377--378
\item Exchange term\idxquad 228,\,231,\,237,\,241,\,268,\,272
\indexspace
\item $f$-sum rule\idxquad 302
\item Fermi energy\idxquad 223
\indexspace
\item H$^+_2$ molecule\idxquad 26
\item Half-life\idxquad 65
\item Holzwarth energies\idxquad 68
\end{theindex}

paper/paper.tex (556 lines added, mode 100644)

@@ -0,0 +1,556 @@
\documentclass{./llncs2e/llncs}
\usepackage{silence}
\WarningFilter{latex}{Text page}
\WarningFilter{caption}{Unsupported}
\WarningFilter{amsmath}{Unable}
\usepackage{todonotes}
\newcommand{\jk}[1]{\todo[inline]{JK: #1}}
%\usepackage{silence}
%\WarningFilter{latex}{Marginpar}
%\WarningFilter{latexfont}{Font shape}
%\WarningFilter{latexfont}{Some font}
%\usepackage{changes}
\usepackage{lmodern}
\usepackage{booktabs}
%\usepackage{caption}
\usepackage{subcaption}
\usepackage{amsmath}
%\usepackage[pass,showframe]{geometry}
\usepackage{array}
\usepackage[hidelinks]{hyperref}
\usepackage{cleveref}
\usepackage{graphicx}
%\usepackage{luacode}
%\usepackage{subfig}
\usepackage{xcolor}
\usepackage{float}
\graphicspath{
{./fig/}
%{/home/joobog/git/bull-io/mpio/ddnime-benchmark/ime_eval/}
%{/home/joobog/git/bull-io/mpio/ddnime-benchmark/ime_eval/info/}
}
%
%\usepackage{makeidx} % allows for indexgeneration
%
\newcolumntype{P}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1\textwidth}}
\begin{document}
%
%\frontmatter % for the preliminaries
%
\pagestyle{headings} % switches on printing of running heads
%\addtocmark{Hamiltonian Mechanics} % additional mark in the TOC
%
%\tableofcontents
%
%\mainmatter % start of the contributions
%
%\title{An MPI-IO In-Memory Driver for Non-Volatile Pooled Memory of the Kove XPD} % -- TODO page limit 18 pages
%\title{Burst Buffer for climate applications}
\title{Benefit of DDN's IME-FUSE for I/O-intensive HPC applications}
%
%\titlerunning{Hamiltonian Mechanics} % abbreviated title (for running head)
% also used for the TOC unless
% \toctitle is used
%
%\author{Double blind}
\author{Eugen Betke\inst{1} \and Julian Kunkel\inst{2}}
%
%\authorrunning{Ivar Ekeland et al.} % abbreviated author list (for running head)
%\institute{Double blind}
%\institute{Deutsches Klimarechenzentrum, Hamburg HH 20146, Germany,\\
%\email{betke@dkrz.de},\\ Home page:
%\texttt{http://dkrz.de}
%\and
%University of Reading, Whiteknights, PO Box 217, Reading, Berkshire, RG6 6AH, United Kingdom,\\
%\email{j.m.kunkel@reading.ac.uk},\\ Home page:
%\texttt{https://www.reading.ac.uk}}
\institute{Deutsches Klimarechenzentrum, Hamburg, Germany,\\
\email{betke@dkrz.de}
\and
University of Reading, Reading, United Kingdom,\\
\email{j.m.kunkel@reading.ac.uk}}
\maketitle % typeset the title of the contribution
\begin{abstract}
Many scientific applications are limited by I/O performance offered by parallel file systems on conventional storage systems.
Flash-based burst buffers provide significantly better performance than HDD-backed storage, but at the expense of capacity.
Burst buffers are considered the next step towards achieving the wire speed of the interconnect and providing more predictable, low-latency I/O, the holy grail of storage.
A critical evaluation of storage technology is mandatory, as there is no long-term experience with its performance behavior for particular application scenarios.
Such an evaluation enables data centers to choose the right products and system architects to integrate them into HPC architectures.
This paper investigates the native performance of DDN IME, a flash-based burst buffer solution.
Then, it takes a closer look at the IME-FUSE file system, which uses IME as a burst buffer and a Lustre file system as back-end.
%\jk{But not only IME-FUSE, right? Others are also examined later on.}
Finally, by utilizing a NetCDF benchmark, it estimates the performance benefit for climate applications.
\end{abstract}
\keywords{Lustre, FUSE, evaluation, flash-based storage}
\section{Introduction}
%Since we understand that, the architecture of the next generation HPC shall consider that fact.
The dilemma of conventional high-performance storage systems based on HDDs is that they must maximize throughput to reduce application run times, while at the same time minimizing the provisioned bandwidth to reduce costs.
The first requirement is often prioritized to the detriment of the second, which typically results in oversizing and low average utilization of the procured bandwidth.
The prioritization is motivated by the need to absorb large performance peaks, particularly due to checkpoint/restart workloads, which often occur in large-scale applications.
However, since these systems are optimized for sequential I/O, data-intensive workloads that do not follow this pattern are unable to saturate the network -- reducing the effective utilization.
% Hardware solutions
Traditional parallel file systems can be deployed on flash-based storage instead of HDDs, increasing performance for random workloads.
Notable work in this direction was done in \cite{heben2014lfsperf}.
Typically, data is accessed via POSIX interfaces, but it can also be accessed using MPI-IO~\cite{Thakur:1999:IMP:301816.301826}.
MPI-IO is a widely accepted middleware layer that relaxes the POSIX semantics and is designed for parallel I/O.
In an alternative storage architecture, a burst buffer~\cite{Liu12onthe,romanus2015challenges} is placed between compute nodes and the storage.
Acting as an intermediate storage tier, its goal is to absorb the I/O peaks from the compute nodes.
Therefore, it provides low latency and high bandwidth to the compute nodes, while also utilizing the back-end storage by streaming data constantly at a lower bandwidth.
%It allows the large-scale applications to share files efficiently, facilitating the programming efforts and is a part of many libraries and used by a wide range of applications.
In-memory systems, like the Kove\textsuperscript{\textregistered} XPD\textsuperscript{\textregistered}~\cite{kove:xpd:l2}, provide byte-addressable storage with better latency, endurance and availability than flash chips.
Flash-based systems, like DDN IME~\cite{ddnime2015}, are also byte-addressable, but have different characteristics than in-memory storage; for example, flash offers a better cost-per-gigabyte ratio.
%The address space of burst-buffer can be used to deploy a parallel file system, but performance would be limited by the POSIX semantics.
%For large data this solution may be not suitable due space limitation.
%In contrast, the relaxed MPI-IO semantics enables a lock-free access.
Accessing fast storage through a POSIX-compliant file system or the MPI-IO interface is an interesting option for many users, because neither source code changes nor recompilation are required, as long as performance does not degrade too much.
Closed-source and pre-compiled applications can also benefit from it.
For that purpose, DDN developed a FUSE module (IME-FUSE) that uses IME as a burst buffer and stores data on a parallel file system.
In this evaluation we used Lustre as the back-end.
%\jk{Lustre is now what?}
Our \textbf{contributions} are: 1) we investigate the peak performance of IME-native and IME-FUSE and compare it to Lustre;
2) we estimate the performance behavior for HPC applications that access data using the NetCDF library.
This paper is structured as follows:
\Cref{sec:relatedWork} discusses related work, then Section~3 describes the test environment.
Sections~4 and~5 show the test setup and performance results.
Finally, the paper is summarized in Section~6.
\section{Related Work}
\label{sec:relatedWork}
% Burst buffer hardware
Relevant state-of-the-art can be grouped into performance optimization, burst buffers to speed up I/O, and in-memory storage solutions.
Optimization and tuning of file systems and I/O libraries is traditionally an important but daunting task, as many configuration knobs can be considered in parallel file system servers, clients, and the I/O middleware.
Without tuning, typical workloads fall short of peak performance by orders of magnitude.
With considerable tuning effort a well fitting problem can yield good results: \cite{HDF5Intro} reports 50\% peak performance with a single 291~TB file.
In \cite{howison2012tuning} MPI-IO and HDF5 were optimized and adapted to each other, improving write throughput by 1.4x to 33x.
Many existing workloads can benefit from a burst buffer as a fast write-behind cache that transparently migrates data from the fast storage to a traditional parallel file system.
Burst buffers typically rely on flash or NVRAM to support random I/O workloads.
For flash based SSDs, many vendors offer high-performance storage solutions, for example, DDN Infinite Memory Engine (IME)~\cite{DDNIME}, IBM FlashSystem~\cite{IBMFlash} and Cray's DataWarp accelerator~\cite{CrayDataWarp}.
Using comprehensive strategies to utilize flash chips concurrently, these solutions are powerful and robust, guaranteeing availability and durability of data for many years.
The integration of the Cray DataWarp burst buffer into the NERSC HPC architecture~\cite{pdswDataWarp} increased the I/O performance of the Chumbo-Crunch simulator by 2.84x to 5.73x compared to Lustre.
However, for the sake of efficient burst buffer usage, the serial simulator workflow had to be split into individual stages (i.e., simulation, visualization, movie encoding), which were then executed in parallel.
The research group at JSC uses a DDN IME burst buffer~\cite{Schenck2016} together with GPFS to identify requirements for the next HPC generation.
The main purpose is to accelerate the I/O performance of NEST (``NEural Simulation Tool'').
Preliminary IOR experiments show that I/O performance can be increased by up to 20x.
BurstFS~\cite{Wang:2016:EBF:3014904.3014997} uses the local NVRAM of compute nodes instead of dedicated remote machines.
An elaborate communication scheme interconnects the distributed NVRAM and provides a contiguous storage space.
This storage is allocated at the start of a job and exists for the lifetime of the job.
In the experiments, BurstFS outperforms OrangeFS and PLFS by several times.
%In \cite{sato2014user}, a user-level InfiniBand-based file system is designed as intermediate layer between compute nodes and parallel file system.
%With SSDs and FDR Infiniband, they achieve on one server a throughput of 2~GB/s and 3~GB/s for write and read, respectively.
The usage of DRAM for storing intermediate data is not new; RAM drives have been used in MS-DOS and Linux (with tmpfs) for decades.
However, the offered RAM storage was used as temporary local storage that is neither durable nor, usually, accessible from remote nodes.
Exporting tmpfs storage via parallel file systems has been used mainly for performance evaluation, but without durability guarantees.
Wickberg and Carothers introduced the RAMDISK Storage Accelerator~\cite{wickberg2012ramdisk} for HPC applications, which flushes data to a back-end.
It consists of a set of dedicated nodes that offer in-memory scratch space.
Jobs can use this storage to pre-fetch input data prior to job execution, or as a write-behind cache to speed up I/O.
A prototype with a PVFS-based RAMDISK improved the performance of 2048 processes compared to GPFS (100~MB/s vs. 36~MB/s for writes).
BurstMem~\cite{wang2014burstmem} provides a burst buffer with write-behind capabilities by extending Memcached~\cite{jose2011memcached}.
Experiments show that the ingress performance grows up to 100~GB/s with 128 BurstMem servers.
%An extension of the work discusses resilience on server failures with minor performance reductions~\cite{wang2015development}.
In the field of big data, in-memory data management and processing has become popular with Spark~\cite{zaharia2012resilient}.
Now there are many software packages providing storage management and compute engines~\cite{zhang2015memory}.
%By using such tools, various application workloads have been accelerated significantly.
The Kove XPD~\cite{kove:xpd:l2} is a robust scale-out pooled memory solution that allows aggregating multiple InfiniBand links and devices into one big virtual address space, which can be dynamically partitioned.
Internally, the Kove provides persistence by periodically flushing memory to a SATA RAID.
Due to the performance differences, this process comes with a delay, but the solution is connected to a UPS to ensure that data becomes durable in case of a power outage.
While providing many interfaces, the XPD does not offer a shared storage that can be utilized from multiple nodes concurrently.
\section{Test Environment}
%\subsection{DDN cluster}
DDN provided access to their test cluster in D\"usseldorf on which 10 nodes could be used for testing.
Each node is equipped with two Sandy Bridge processors (8 cores, E5-2650v2 @2.60GHz) and 64~GB RAM.
They are interconnected with a Mellanox ConnectX-4 card providing 100~Gb/s (4x EDR).
As storage, a DDN ES14K (Exascale 3.1) with two metadata servers and Lustre 2.7.19.12 is provided; additionally, an IME system consisting of 4 servers is available.
The flash-native data cache of IME acts as a burst buffer and is drained to the Lustre system; the write performance reported with IOR is 85~GB/s.
The DDN IME provides byte-addressable flash-based storage space with high performance characteristics.
It can be addressed directly (IME-native) in a fast and efficient way, but DDN also provides a number of convenient solutions that require less integration effort.
(1) Applications can be re-linked against an MPI-IO implementation with IME support, which was developed by DDN.
(2) Furthermore, DDN provides a FUSE module (IME-FUSE) with IME support, which is a convenient way to access shared storage.
IME-FUSE is POSIX compliant and can be used by applications without any source code modification, recompilation, or re-linking.
%\jk{Was ist patched Lustre.}
In the conducted tests, IME is used via its FUSE mount and backed by the DDN Lustre.
We assume that during the write experiments data is kept inside the burst buffer and not written back, albeit we cannot ensure this.
\begin{figure}[t]
\centering
\includegraphics[width=.7\linewidth]{system3}
\caption{DDN test cluster}
\label{fig:testsytem}
\end{figure}
The DDN cluster is an experimental system with a lightweight software setup.
In particular, exclusive access to the IME was not guaranteed, so some results could be affected by other users.
Therefore, we do not draw conclusions from outliers, since we do not know their origin.
\subsection{Benchmarks}
As our primary benchmark, IOR~\cite{loewe2012ior} is used, varying access granularity, processes per node, number of nodes, and access pattern (random and sequential).
The official version of IOR allows us to measure the pure I/O performance without considering open/close times (see \Cref{eq:ior_official}).
To synchronize the measurements and capture the times for open, close, and I/O separately, inter-phase barriers are turned on (IOR option -g).
The DDN version (IME-IOR) supports the IME-native interface, but does not allow measuring pure I/O performance.
Therefore, its performance values include open/close times (see \Cref{eq:ior_ddn}).
\begin{equation}\label{eq:ior_official}
\text{perf}_\text{Lustre, IME-FUSE} = \frac{\text{filesize}}{t_{\text{io}}}
\end{equation}
\begin{equation}\label{eq:ior_ddn}
\text{perf}_\text{IME-native} = \frac{\text{filesize}}{t_{\text{total}}} = \frac{\text{filesize}}{t_{\text{open}} + t_{\text{io}} + t_{\text{close}}}
\end{equation}
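The practical difference between the two metrics can be sketched in a few lines; the timings below are illustrative assumptions, not measured values.

```python
def perf_io_only(filesize_mib, t_io):
    # Official IOR (Lustre, IME-FUSE): open/close times excluded
    return filesize_mib / t_io

def perf_total(filesize_mib, t_open, t_io, t_close):
    # IME-IOR (IME-native): open/close times included
    return filesize_mib / (t_open + t_io + t_close)

# Illustration: a fixed 0.5 s open+close overhead penalizes a small file
# far more than a large one at the same raw bandwidth of 100 MiB/s.
small = perf_total(100, 0.3, 1.0, 0.2)      # ~66.7 MiB/s reported
large = perf_total(10000, 0.3, 100.0, 0.2)  # ~99.5 MiB/s reported
```

This is why the reported IME-native throughput of small experiments understates the raw I/O rate more than that of large ones.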
Since IOR does not support NetCDF, and supports HDF5 only with a limited configuration of the pattern, NetCDF-Bench is additionally used\footnote{\url{https://github.com/joobog/netcdf-bench}}.
This benchmark uses the parallel NetCDF interface to read/write patterns on a 4D dataset stored in a NetCDF4/HDF5 file.
It decomposes a domain geometry of ($t$,$x$,$y$,$z$), e.g., ($100$,$16$,$64$,$4$), across the processes of an MPI-parallel program.
The processes partition the geometry in the x and y directions, and each parallel process accesses one time step per iteration.
The benchmark exports various options to control the optimizations and data mappings of NetCDF (chunking vs. fixed layout, unbound dimensions, chunk size, pre-filling).
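A minimal sketch of such a decomposition is given below; the factorization of the process count into a $(p_x, p_y)$ grid is our simplifying assumption, not necessarily how NetCDF-Bench implements it.

```python
def decompose(x, y, nprocs):
    """Pick a process grid (px, py) with px*py == nprocs that evenly
    divides the x and y dimensions (simplifying assumption)."""
    for px in range(min(nprocs, x), 0, -1):
        py = nprocs // px
        if nprocs % px == 0 and x % px == 0 and y % py == 0:
            return px, py
    raise ValueError("no even (px, py) decomposition")

def bytes_per_step(x, y, z, px, py, value_size=4):
    # data one process touches in one time step, assuming 4-byte integers
    return (x // px) * (y // py) * z * value_size
```

For example, eight processes on an x=16, y=64 plane form an 8x1 grid, each accessing a 2x64 slab of every time step.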
Finally, to measure the performance of individual operations and thus investigate variability, the sequential benchmark \texttt{io-modelling} is used\footnote{\url{https://github.com/JulianKunkel/io-modelling}}.
It uses a high-precision timer and supports various access patterns on top of the POSIX interface.
\section{Experiment Configuration}
%\subsection{Software}
On the DDN cluster, we use NetCDF-Bench, IOR, and IME-IOR to measure the IME's throughput, and use \texttt{io-modelling} for testing variability.
Each test configuration is repeated 10 times.
All experiments are conducted with block sizes 16, 100, 1024, and 10240~KiB.
To find the performance limits of the test system, we use the IOR benchmarks.
For that purpose, we conduct a series of experiments with various parameters, measuring the performance for \{read, write\} $\times$ \{random, sequential\} $\times$ \{POSIX, MPIIO\} $\times$ \{Lustre, IME-FUSE, IME-native\} $\times$ \{collective, independent\}.
The stripe count on Lustre is twice as large as the number of nodes.
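The size of this parameter sweep can be enumerated as a cross product. The count below is a sketch that assumes every combination is actually run; in practice some combinations may not apply (e.g., IME-native is reached through its own interface), so it is an upper bound.

```python
from itertools import product

operations = ["read", "write"]
patterns   = ["random", "sequential"]
interfaces = ["POSIX", "MPIIO"]
storage    = ["Lustre", "IME-FUSE", "IME-native"]
modes      = ["collective", "independent"]
blocksizes_kib = [16, 100, 1024, 10240]
repetitions = 10

configs = list(product(operations, patterns, interfaces, storage, modes))
total_runs = len(configs) * len(blocksizes_kib) * repetitions  # per (NN, PPN)
```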
The purpose of NetCDF-Bench is to investigate the I/O behaviour of typical scientific applications that access large variables through NetCDF4.
In this experiment, we varied the following parameters: \{Lustre, IME-FUSE\} $\times$ \{read, write\} $\times$ \{chunked, contiguous\} $\times$ \{collective, independent\}.
With the \texttt{io-modelling} benchmark, we examined the variability of individual I/O accesses for \{Lustre, IME-FUSE\} $\times$ \{read, write\} $\times$ \{random, sequential\}.
\subsection{Open/Close Times}
The open/close times reduce the reported performance of IME.
They are excluded whenever possible for two reasons.
Firstly, in our experiments the test file size is variable ($\text{filesize} = 100 \cdot \text{blocksize} \cdot \text{NN} \cdot \text{PPN}$), so the overhead affects small experiments more than larger ones.
Secondly, production runs are assumed to use larger files and capacities, reducing this overhead.
Unless otherwise stated, the performance reported in this paper was measured without open/close times.
The goal of our evaluation is to systematically investigate the scaling behavior of the DDN IME, IME-FUSE, and Lustre.
In the following experiments, we use 1-10 client nodes (NN) and 1-8 processes per node (PPN) to push the hardware to its limits.
On each compute node, only the CPU that is directly connected to the InfiniBand adapter is used, to avoid QPI overhead.
To provide reliable results, each experiment was repeated 10 times.
%\jk{Wie oft wurden die Experimente wiederholt steht nirgendwo?}
\subsection{Performance}
%\subsubsection{Peak performance}
\Cref{tab:bestperf_nn10} shows the best and average performance values observed with IME-IOR during the test runs on a single node and on 10 nodes, for random and sequential I/O.
Extrapolating the average random-I/O performance with NN=1 and PPN=8, 10 client nodes should achieve a throughput of 61~GB/s for writes and 80~GB/s for reads.
As \Cref{tab:bestperf_nn10} shows, the measured write performance is close to the expected value, which indicates that the compute nodes are the bottleneck.
The measured read performance, however, is significantly lower than expected, indicating that the IMEs are the bottleneck here.
The same considerations apply to sequential performance.
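The extrapolation argument can be made concrete with the mean random-I/O values from \Cref{tab:bestperf_nn10}:

```python
# Mean random-I/O throughput in MiB/s (IME-IOR, block size 10 MiB)
single_node = {"read": 8100, "write": 6120}    # NN=1,  PPN=8
ten_nodes   = {"read": 65300, "write": 58400}  # NN=10, PPN=8

# Linear extrapolation from one node to ten
expected = {op: 10 * mibs for op, mibs in single_node.items()}
scaling = {op: ten_nodes[op] / expected[op] for op in expected}
# Writes scale almost linearly (~95%), reads fall clearly behind (~81%),
# pointing at the clients and the IMEs as the respective bottlenecks.
```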
\begin{figure}[b!]
\centering
\includegraphics[width=0.8\textwidth]{performance_overview_rnd_ime.png}
\caption{Random access performance depending on blocksize and PPN}
\label{fig:read_write_ime}
\end{figure}
\section{Evaluation}
\begin{table}[b!]
\centering
\begin{tabular}{r|r|r|r|r|r|l|r}
& & \multicolumn{2}{c|}{Best} & \multicolumn{2}{c|}{Mean} & & \\
NN & PPN & \multicolumn{2}{c|}{Performance} & \multicolumn{2}{c|}{Performance} & I/O type & File size \\
& & \multicolumn{2}{c|}{in [MiB/s]} & \multicolumn{2}{c|}{in [MiB/s]} & & in [MiB] \\
& & read & write & read & write & & \\
\hline
1 &1 &2,560 &1,240 &2,400 &1,180 & rnd & 1000 \\
1 &1 &2,290 &1,230 &2,000 &870 & seq & 1000 \\
\hline
1 &8 &8,500 &6,390 &8,100 &6,120 & rnd & 8000 \\
1 &8 &8,700 &6,380 &7,100 &4,530 & seq & 8000 \\
\hline
10 &1 &22,300 &10,700 &21,200 &10,000 & rnd & 10000 \\
10 &1 &23,200 &10,800 &22,200 &8,430 & seq & 10000 \\
\hline
10 &8 &67,500 &60,200 &65,300 &58,400 & rnd & 80000 \\
10 &8 &67,500 &62,900 &61,700 &54,300 & seq & 80000 \\
\end{tabular}
\caption{The best and mean performance measured with IME-IOR (block size: 10~MiB; NN: number of nodes; PPN: processes per node).}
\label{tab:bestperf_nn10}
\end{table}
%\subsubsection{IME}
\textbf{IME-native (\Cref{fig:read_write_ime,fig:overview_ime}):}
A characteristic of IME-native is that, for each block size, performance scales linearly for both read and write accesses.
The performance behavior for each block size can be approximated by a linear function, and small block sizes tend to show better write behaviour.
The complete set of performance results for random I/O is shown in \Cref{fig:overview_ime}.
Firstly, it confirms the linear scalability.
Secondly, the curves show no regression, probably because the experiment setup could not push the IMEs to their limits.
Further observations are:
1) writing small blocks is more efficient than reading small blocks, while reading large blocks is more efficient than writing large blocks;
%(2) the best configuration is not able to achieve the wire speed of the interconnect,
%(2) for large block sizes, a high percentage of peak is achieved quickly,
2) performance increases with increasing access granularity;
3) with 1 or 4 PPN, the available network bandwidth is not utilized.
With PPN=8, we come close to the available network bandwidth for 1 and 10~MiB accesses.
Hence, the I/O path involves relevant latencies.
%\def\hperf{0.45\textheight}
\def\hperf{0.45\textheight}
%\jk{Im Bild braucht es als TEXT PPN, versteht man sonst nicht}
\begin{figure}[t]
\centering
\includegraphics[height=\hperf]{performance_ior_ime_ind_CHUNK:notset_FILL:notset_LIM:notset_legend:yes_size:6x8.png}
\caption{IME-native random I/O performance (lines go through max. values)}
\label{fig:overview_ime}
\end{figure}
%+ sqlite3 results_random.db 'select nn, ppn, t*x*y*z*4/1024/1024, max(read)/1024 from p where app="ior" and iface="posix" and fs="lustre" and nn!=1 '
%10 8 100.0 17.40234375
%+ sqlite3 results_random.db 'select nn, ppn, t*x*y*z*4/1024/1024, max(write)/1024 from p where app="ior" and iface="posix" and fs="lustre" and nn!=1 '
%4 6 1000.0 11.865234375
%+ sqlite3 results_random.db 'select nn, ppn, t*x*y*z*4/1024/1024, max(read)/1024 from p where app="ior" and iface="mpio" and fs="lustre" and nn!=1 '
%10 8 1000.0 16.853515625
%+ sqlite3 results_random.db 'select nn, ppn, t*x*y*z*4/1024/1024, max(write)/1024 from p where app="ior" and iface="mpio" and fs="lustre" and nn!=1 '
%10 8 100.0 3.763671875
%\subsubsection{Lustre}
\textbf{Lustre (\Cref{fig:overview_lustre}):}
Firstly, a single node can profit from caching when reading data.
In this case, the observed performance can rise up to 37~GiB/s (not shown in the figure).
These caching effects disappear for $\text{NN}>1$; hence, we ignore them in the further discussion.
\begin{figure}[p!]
\centering
\begin{subfigure}{\textwidth}
\subcaption{Lustre}
\includegraphics[height=\hperf]{performance_ior_lustre_ind_CHUNK:notset_FILL:notset_LIM:notset_legend:yes_size:12x8.png}
\label{fig:overview_lustre}
\end{subfigure}
\medskip
\begin{subfigure}{\textwidth}
\subcaption{IME-FUSE}
\includegraphics[height=\hperf]{performance_ior_fuse_ind_CHUNK:notset_FILL:notset_LIM:notset_legend:yes_size:12x8.png}
\label{fig:overview_fuse}
\end{subfigure}
\caption{Random I/O performance (lines go through max. values)}
\end{figure}
Secondly, the read performance does not exceed 17.4~GiB/s, which is achieved with NN=10, PPN=8, BS=100~KiB.
This is counter-intuitive, because larger block sizes usually show better performance.
The best write performance is 11.8~GiB/s, achieved with NN=4, PPN=6, BS=1000~KiB.
This measurement and the increasingly flattening curves indicate poor scalability of Lustre.
Generally speaking, Lustre has a lot of internal overhead, especially to remain POSIX compliant, e.g., distributed lock management.
Thirdly, a particularly striking point is the MPI-IO write performance.
It is significantly lower than for the other configurations.
We have no explanation for this behaviour at the moment.
It is also a confusing result, because it contradicts our later experiment with NetCDF-Bench (\Cref{fig:netcdf_perf}):
NetCDF4 uses MPI-IO as a back-end, but achieves better results.
%Although, a comparison is possible to a limited extend only, the benchmarks use different access pattern.
%\subsubsection{IME-FUSE}
\textbf{IME-FUSE (\Cref{fig:overview_fuse}):}
The file system shows a linear scalability similar to IME-native, but provides less I/O performance, especially for reading.
This is probably caused by the FUSE overhead, which includes moving I/O requests from user space to kernel space, and then from kernel space to IME-FUSE.
%\subsection{Opening/Closing of Files}
%\label{sec:open-close}
%\begin{figure}[ht]
% \begin{subfigure}{.49\textwidth}
% \centering
% \includegraphics[width=\textwidth]{performance_ior_lustre_mpio_writeopen}
% \subcaption{Lustre}
% \label{fig:open_lustre}
% \end{subfigure}
% \begin{subfigure}{.49\textwidth}
% \centering
% \includegraphics[width=\textwidth]{performance_ior_fuse_mpio_writeopen}
% \subcaption{IME-FUSE}
% \label{fig:open_fuse}
% \end{subfigure}
% \caption{Open times}
% \label{fig:open}
%\end{figure}
%As already mentioned, IME-IOR doesn't provide open/close times.
%Therefore, we skip this investigation and take a look at IME-FUSE and Lustre (\Cref{fig:open}).
%Opening files for writing is one of the most costly metadata operations.
%The small number of nodes doesn't allow to create a reliable model to describe the open/close behaviour.
%The opening times on FUSE are significantly higher than on Lustre.
%The reason is that a metadata request can not be sent directly to the metadata server, but has to go through the whole FUSE stack.
%The time need on FUSE can be higher than 4 seconds whereby on Lustre it is less than 0.20 seconds.
%Due the limited size of the test cluster it is not possible to proof that, but we assume that on larger runs the open times will converge to the same value.
%Because FUSE overhead is constant and metadata server of the file system will be the most significant bottleneck.
\subsection{Application Kernel Using HDF5}
In this experiment, the HDF5 VOL development branch (dated 2016-05-09), NetCDF~4.4.1, and NetCDF-Bench are used.
%Several values for the 4D data geometry (($100$:$16$:$64$:$4$) $\approx$ 3.1MiB, ($100$:$16$:$64$:$25$) $\approx$ 19.5iB, ($100$:$16$:$64$:$256$) $\approx$ 200MiB, ($100$:$16$:$64$:$2560$) $\approx$ 2000MiB) of raw integer data have been explored.
Several values for the 4D data geometry of raw integer data have been explored.
For each block size, we performed 100 measurements.
The configuration parameters are summarized in \Cref{tab:netcdf_conf}.
\begin{table}
\centering
\begin{tabular}{l|r|r}
Parameter (-d) & Data size & Block size \\
($t$:$x$:$y$:$z$) & [in GiB] & [in KiB] \\
\hline
($100$:$16$:$64$:$4$) & 0.5 & 16 \\
($100$:$16$:$64$:$25$) & 3.1 & 100 \\
($100$:$16$:$64$:$256$) & 7.8 & 1024 \\
($100$:$16$:$64$:$2560$) & 78.1 & 10240 \\
\end{tabular}
\caption{NetCDF-Bench configurations used during the benchmark.}
\label{tab:netcdf_conf}
\end{table}
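The block sizes in \Cref{tab:netcdf_conf} follow directly from the geometry, assuming each access covers one ($x$,$y$,$z$) time step of 4-byte integers:

```python
def block_size_kib(x, y, z, value_size=4):
    # one (x, y, z) slab of the 4D variable, i.e. one time step, in KiB
    return x * y * z * value_size // 1024

geometries = [(16, 64, 4), (16, 64, 25), (16, 64, 256), (16, 64, 2560)]
sizes = [block_size_kib(*g) for g in geometries]
# -> 16, 100, 1024 and 10240 KiB, matching the table
```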
In the experiments, we use 10 client nodes and 8 processes per node to access a shared file.
All experiments were conducted with fixed dimension sizes only, since unlimited/variable dimensions are not supported in combination with independent I/O in NetCDF4.
\Cref{fig:netcdf_perf} shows the results.
Generally, as expected, independent chunked I/O was a good configuration.
%\subsubsection{Performance}
%+ sqlite3 results_benchtool.db 'select nn, ppn, type, t*x*y*z*4/1024/1024, max(read)/1024 from p where app="benchtool" and type="coll" and fs="lustre" and unlimited="notset" and nn=10 and ppn=8 '
%10 8 coll 1000.0 23.4844971199307
%+ sqlite3 results_benchtool.db 'select nn, ppn, type, t*x*y*z*4/1024/1024, max(write)/1024 from p where app="benchtool" and type="coll" and fs="lustre" and unlimited="notset" and nn=10 and ppn=8 '
%10 8 coll 1000.0 14.0872631923396
%+ sqlite3 results_benchtool.db 'select nn, ppn, type, t*x*y*z*4/1024/1024, max(read)/1024 from p where app="benchtool" and type="ind" and fs="lustre" and unlimited="notset" and nn=10 and ppn=8 '
%10 8 ind 1000.0 40.7447031422526
%+ sqlite3 results_benchtool.db 'select nn, ppn, type, t*x*y*z*4/1024/1024, max(write)/1024 from p where app="benchtool" and type="ind" and fs="lustre" and unlimited="notset" and nn=10 and ppn=8 '
%10 8 ind 1000.0 18.5541272529369
%+ sqlite3 results_benchtool.db 'select nn, ppn, type, t*x*y*z*4/1024/1024, max(read)/1024 from p where app="benchtool" and type="coll" and fs="fuse" and unlimited="notset" and nn=10 and ppn=8 '
%10 8 coll 1000.0 22.9363088999496
%+ sqlite3 results_benchtool.db 'select nn, ppn, type, t*x*y*z*4/1024/1024, max(write)/1024 from p where app="benchtool" and type="coll" and fs="fuse" and unlimited="notset" and nn=10 and ppn=8 '
%10 8 coll 1000.0 14.1118732992937
%+ sqlite3 results_benchtool.db 'select nn, ppn, type, t*x*y*z*4/1024/1024, max(read)/1024 from p where app="benchtool" and type="ind" and fs="fuse" and unlimited="notset" and nn=10 and ppn=8 '
%10 8 ind 1000.0 38.7201201759216
%+ sqlite3 results_benchtool.db 'select nn, ppn, type, t*x*y*z*4/1024/1024, max(write)/1024 from p where app="benchtool" and type="ind" and fs="fuse" and unlimited="notset" and nn=10 and ppn=8 '
%10 8 ind 1000.0 18.6272447947956
\begin{figure}[tb]
\centering
\includegraphics[width=0.9\textwidth]{performance_benchtool_FS:lustre_IFACE:mpio_FILLED_notset_LIM:notset.png}
\caption{NetCDF performance for Lustre (similar to IME-FUSE)}
\label{fig:netcdf_perf}
\end{figure}
\textbf{Lustre vs. IME-FUSE:}
Generally, the performance looks very similar for Lustre and IME-FUSE, which is why we only include the figure for Lustre. There are a few differences:
(1) collective I/O without chunking causes large variability when reading 16~KiB blocks, (2) and achieves better performance when writing 10~MiB blocks on Lustre.
(3) If chunking is enabled and independent I/O is used, then 10~MiB blocks can be read with low variability.
The best performance achieved with collective I/O is 23~GiB/s for reads and 14~GiB/s for writes; with independent I/O, it is 40~GiB/s for reads and 18~GiB/s for writes.
\textbf{Chunking vs. no chunking:}
Read performance suffers considerably on both file systems if chunking is enabled for small blocks.
The probability that several NetCDF processes access the same chunk increases for small block sizes.
In this case, each process has to load the whole chunk into memory, even if only a small part of it is required.
Such inefficient access patterns can lead to unnecessary data transfer over the network, i.e., when large parts of the data are pre-loaded but never used.
This does not apply to large block sizes; therefore, we can observe performance advantages there.
\textbf{Independent I/O vs. collective I/O:}
If chunking is enabled, collective I/O degrades the performance.
If chunking is disabled, it improves I/O for small blocks and degrades I/O of large blocks.
%Unfortunately, the chosen block sizes don't allow to determine the threshold.
\textbf{Caching:}
For large block sizes (10240~KiB), the independent chunked read performance outperforms the write performance.
We suppose that caching is responsible for this speed-up.
\subsection{Performance Variability with Individual I/Os}
This experiment measures the timing of 10,000 or 1,024 individual I/Os issued by a single process on the IME test cluster, on IME-FUSE and Lustre.
\Cref{fig:variability} shows the qualitative difference between the file systems.
The figure shows the density (similar to a smoothed histogram) of the individually timed I/Os.
\begin{figure}[bpt!]
\centering
\includegraphics[width=\textwidth]{density.png}
\caption{Density of timing individual I/O operations}
\label{fig:variability}
\end{figure}
We observe that 1) read operations on Lustre are faster than on IME-FUSE -- presumably due to client-side caching;
2) the random-access acceleration of IME improves write latencies and throughput.
\section{Conclusion}
IME is a burst buffer solution that is completely transparent to applications and users.
These properties make it beneficial for random workloads.
Read performance depends on whether data is located on the IME flash or on Lustre.
The data migration policy is usually hidden from the users, so the read behaviour is not known in advance.
There is, however, an API that allows users to stage data explicitly.
For large access sizes and many processes per node, IME was able to nearly saturate the network.
We did not achieve better performance with IME in all test scenarios, particularly for the NetCDF benchmark.
The suboptimal performance gain of IME compared to Lustre may be due to:
1) the access pattern caused by NetCDF4 with HDF5 having considerable overhead;
2) the Lustre storage from DDN being already well optimized;
3) the small and experimental laboratory setup used for testing.
We expect a significant performance gain once more clients access IME.
Further large-scale investigation is necessary.
\section*{Acknowledgment}
{
\small
%Thanks to Jean-Thomas Acquaviva for providing access to the IME test cluster and valuable feedback.
Thanks to DDN for providing access to the IME test cluster and to Jean-Thomas Acquaviva for the support.
%%Thanks to our sponsor William E. Allcock for providing access and feedback.
%%This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.
}
%
% ---- Bibliography ---- BibLatex
%
\bibliographystyle{splncs03}
\bibliography{bibliography}{}
%
% ---- Bibliography ----
%
%\begin{thebibliography}{}
%\end{thebibliography}
%\clearpage
%\addtocmark[2]{Author Index} % additional numbered TOC entry
%\renewcommand{\indexname}{Author Index}
%%\printindex
%\clearpage
%\addtocmark[2]{Subject Index} % additional numbered TOC entry
%\markboth{Subject Index}{Subject Index}
%\renewcommand{\indexname}{Subject Index}
%%\input{subjidx.ind}
\end{document}
