Shorten to 12 pages - Completed.
This commit is contained in:
parent e697efc049
commit fb932e70ee
@ -212,6 +212,5 @@ journal = {Journal of High Performance Computing},
series = {Issue 1},
isbn = {},
doi = {10.5281/zenodo.4478960},
url = {\url{https://jhps.vi4io.org/issues/#1-1}},
abstract = {{Every day, supercomputers execute 1000s of jobs with different characteristics. Data centers monitor the behavior of jobs to support the users and improve the infrastructure, for instance, by optimizing jobs or by determining guidelines for the next procurement. The classification of jobs into groups that express similar run-time behavior aids this analysis as it reduces the number of representative jobs to look into. This work utilizes machine learning techniques to cluster and classify parallel jobs based on the similarity in their temporal I/O behavior. Our contribution is the qualitative and quantitative evaluation of different I/O characterizations and similarity measurements and the development of a suitable clustering algorithm. <br><br> In the evaluation, we explore I/O characteristics from monitoring data of one million parallel jobs and cluster them into groups of similar jobs. Therefore, the time series of various I/O statistics is converted into features using different similarity metrics that customize the classification. <br><br> When using general-purpose clustering techniques, suboptimal results are obtained. Additionally, we extract phases of I/O activity from jobs. Finally, we simplify the grouping algorithm in favor of performance. We discuss the impact of these changes on the clustering quality.}}
}

@ -92,7 +92,7 @@ It is non-trivial to identify jobs with similar behavior from the pool of execut
Re-executing the same job will lead to slightly different behavior; a program may be executed with different inputs or using a different configuration (e.g., number of nodes).
Job names are defined by users; while a similar name may hint at a similar workload, finding other applications with the same I/O behavior would not be possible.

In the paper \cite{Eugen20HPS}, the authors developed several distance measures and algorithms for the clustering of jobs based on the time series of their I/O behavior.
In the paper \cite{Eugen20HPS}, we developed several distance measures and algorithms for the clustering of jobs based on the time series of their I/O behavior.
These distance measures can be applied to jobs with different runtimes and numbers of nodes utilized but differ in the way they define similarity.
They showed that the metrics can be used to cluster jobs; however, it remained unclear whether the method can be used by data center staff to explore similar jobs effectively.
In this paper, we refine these algorithms slightly, include another algorithm, and apply them to rank jobs based on their temporal similarity to a reference job.

@ -155,9 +155,9 @@ Therefore, we first need to define how a job's data is represented, then describ
On the Mistral supercomputer at DKRZ, the monitoring system \cite{betke20} gathers nine I/O metrics at ten-second intervals on all nodes for the two Lustre file systems, together with general job metadata from the SLURM workload manager.
The result is 4D data (time, nodes, metrics, file system) per job.
The distance measures should handle jobs of different lengths and node counts.
In the open access article \cite{Eugen20HPS}\footnote{\url{https://zenodo.org/record/4478960/files/jhps-incubator-06-temporal-29-jan.pdf}}, we discussed in detail a variety of options from 1D job-profiles to data reductions for comparing time series data, as well as the general workflow and pre-processing.
In the open access article \cite{Eugen20HPS}\footnote{\scriptsize \url{https://zenodo.org/record/4478960/files/jhps-incubator-06-temporal-29-jan.pdf}}, we discussed in detail a variety of options from 1D job-profiles to data reductions for comparing time series data, as well as the general workflow and pre-processing.
We will be using this representation.
In a nutshell, for each job executed on Mistral, they partitioned it into 10-minute segments\footnote{We found in preliminary experiments that 10-minute segments provide sufficient resolution while reducing noise, i.e., the variation of the statistics when re-running the same job.} and computed the arithmetic mean of each metric, categorizing the value into NonIO (0), HighIO (1), and CriticalIO (4) for values below the 99th percentile, up to the 99.9th percentile, and above, respectively.
In a nutshell, for each job executed on Mistral, they partitioned it into 10-minute segments\footnote{We found in preliminary experiments that 10-minute segments reduce noise, i.e., the variation of the statistics when re-running the same job.} and computed the arithmetic mean of each metric, categorizing the value into NonIO (0), HighIO (1), and CriticalIO (4) for values below the 99th percentile, up to the 99.9th percentile, and above, respectively.
The values are chosen to be 0, 1, and 4 because we arithmetically derive metrics: naturally, the value 0 indicates that no I/O issue appears; we weight critical I/O to be 4x as important as high I/O.
This strategy ensures that the same approach can be applied to other HPC systems regardless of the actual distribution of these statistics at that data center.
After the mean value across nodes is computed for a segment, the resulting numeric value is encoded either using a binary representation (I/O activity on the segment: yes/no) or a hexadecimal representation (quantizing the numerical performance value into 0-15), which is then ready for similarity analysis.
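To make the segmentation and coding concrete, here is a minimal sketch for a single metric's time series, assuming the percentile thresholds are precomputed system-wide; all names and the exact quantization are illustrative, not the paper's implementation:

```python
import numpy as np

def encode_metric(series_10s, p99, p999, seg_len=60):
    """Aggregate a 10-second series into 10-minute segments (60 samples each),
    then derive the binary coding (I/O activity: yes/no per segment) and a
    hexadecimal coding that quantizes the segment mean into 0-15."""
    n_seg = len(series_10s) // seg_len
    means = np.asarray(series_10s[: n_seg * seg_len]).reshape(n_seg, seg_len).mean(axis=1)
    # Categories: NonIO (0) below the 99th percentile, HighIO (1) up to the
    # 99.9th percentile, CriticalIO (4) above -- critical I/O is weighted 4x.
    cats = np.where(means < p99, 0, np.where(means < p999, 1, 4))
    binary = "".join("1" if c > 0 else "0" for c in cats)
    # Hypothetical quantization: scale by the 99.9th percentile, cap at 15.
    hexa = "".join(format(min(int(15 * m / p999), 15), "x") for m in means)
    return cats, binary, hexa
```
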
@ -194,7 +194,6 @@ For the inspection of the jobs, a user may explore the job metadata, searching f
\label{sec:refjobs}

For this study, we chose the reference job called Job-M: a typical MPI-parallel 8-hour compute job on 128 nodes which writes time series data after some spin-up. %CHE.ws12

The segmented timelines of the job are visualized in \Cref{fig:refJobs} -- remember that the mean value is computed across all nodes on which the job ran.
This coding is also used for the Q algorithms, thus this representation is what the algorithms will analyze; the B algorithms merge all timelines together as described in~\cite{Eugen20HPS}.
The figures show the values of the active metrics ($\neq 0$); if few are active, they are shown in one timeline; otherwise, they are rendered individually to provide a better overview.

@ -220,7 +219,7 @@ Finally, the quantitative behavior of the 100 most similar jobs is investigated.

\subsection{Performance}

To measure the performance for computing the similarity to the reference jobs, the algorithms are executed 10 times on a compute node at DKRZ which is equipped with two Intel Xeon E5-2680v3 @2.50GHz and 64GB DDR4 RAM.
To measure the performance for computing the similarity to the reference job, the algorithms are executed 10 times on a compute node at DKRZ which is equipped with two Intel Xeon E5-2680v3 @2.50GHz and 64GB DDR4 RAM.
A boxplot of the runtimes is shown in \Cref{fig:performance}.
The runtime is normalized to 100k jobs, i.e., for B-all it takes about 41\,s to process 100k jobs out of the 500k total jobs that this algorithm will process.
Generally, the B algorithms are the fastest, while the Q algorithms often take 4-5x as long.
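The per-100k normalization is a simple linear scaling; a minimal sketch (the ~205 s total is implied by the 41 s-per-100k figure for B-all over 500k jobs, not a separately reported measurement):

```python
def runtime_per_100k(total_seconds: float, jobs_processed: int) -> float:
    """Normalize a measured runtime to a per-100k-jobs figure for comparability."""
    return total_seconds * 100_000 / jobs_processed

# B-all scans all 500k jobs; 41 s per 100k therefore implies ~205 s in total.
print(runtime_per_100k(205.0, 500_000))  # -> 41.0
```
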
@ -250,7 +249,7 @@ They could easily be parallelized which would then allow for an online analysis.

\subsection{Quantitative Analysis}

In the quantitative analysis, we explore, for the different algorithms, how the similarity of our pool of jobs behaves with respect to our reference jobs.
In the quantitative analysis, we explore, for the different algorithms, how the similarity of our pool of jobs behaves with respect to our reference job.
The support team in a data center may have time to investigate the most similar jobs.
Time for the analysis is typically bounded; for instance, the team may analyze the 100 most similar jobs and rank them. We refer to them as the Top\,100 jobs, and \textit{Rank\,i} refers to the job with the i-th highest similarity to the reference job -- sometimes these values can be rather close together, as we see in the histogram in
\Cref{fig:hist} of the actual number of jobs with a given similarity.
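To make the ranking concrete, a minimal sketch of how a Top\,100 could be derived; `similarity` stands in for any of the studied measures, and all identifiers here are hypothetical, not the paper's code:

```python
import heapq

def top_k(reference, jobs, similarity, k=100):
    """Rank all jobs by their similarity to the reference job and keep the Top k.

    `jobs` maps job IDs to their encoded representations; `similarity` is any
    of the studied measures (e.g., B-*, Q-lev, Q-native) returning a value in
    [0, 1]. The first tuple in the result corresponds to Rank 1.
    """
    scored = ((similarity(reference, job), job_id) for job_id, job in jobs.items())
    return heapq.nlargest(k, scored)  # [(sim, job_id), ...]
```
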
@ -324,17 +323,8 @@ Q-phases is able to identify much shorter or longer jobs.
To verify the suitability of the similarity metrics, for each algorithm we carefully investigated the timelines of each of the jobs in the Top\,100.
We subjectively found that the approach works very well and identifies suitably similar jobs.
To demonstrate this, we include a selection of job timelines and interesting job profiles.
These can be visually and subjectively compared to our reference job shown in \Cref{fig:refJobs}.
For space reasons, the included images are scaled down, making it difficult to read the text.
However, we believe that they are still well suited for a visual inspection and comparison.

Inspecting the Top\,100 highlights the differences between the algorithms.
All algorithms identify a diverse range of job names for this reference job in the Top\,100.
Firstly, the name of the reference job appears 30 times in the whole dataset.
An additional 932 jobs have a slightly modified name.
So this job type is not executed particularly frequently and, therefore, our Top\,100 is expected to contain other names.
All algorithms identify the reference job itself but none of the other jobs with the identical name; they find 1 (KS), 2 (B-* and Q-native), or 3 (Q-lev and Q-phases) jobs with slightly modified names.
Some applications are more prominent in these sets, e.g., for B-aggzero, 32~jobs contain WRF (a model) in the name.
The number of unique names is 19, 38, 49, and 51 for B-aggzero, Q-phases, Q-native, and Q-lev, respectively.

When inspecting their timelines, the jobs that are similar according to the B algorithms (see \Cref{fig:job-M-bin-aggzero}) subjectively appear to us to be different.
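The name statistics above boil down to simple frequency counts over each Top\,100 list; a sketch, assuming job names are available as strings (all identifiers hypothetical):

```python
from collections import Counter

def name_statistics(top100_names, reference_name):
    """Summarize job-name diversity in a Top 100 list."""
    counts = Counter(top100_names)
    return {
        "unique_names": len(counts),                  # e.g., 19 for B-aggzero
        "same_name_as_reference": counts[reference_name],
        "most_common": counts.most_common(3),         # dominant applications, e.g., WRF
    }
```
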
@ -350,12 +340,12 @@ While jobs exhibit short bursts of other active metrics even for low similarity,
\includegraphics[width=\textwidth]{job_similarities_5024292-out/bin_aggzeros-0.7347--14timeseries4498983}
\caption{Rank\,15, SIM=73\%}
\end{subfigure}
\begin{subfigure}{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{job_similarities_5024292-out/bin_aggzeros-0.5102--99timeseries5120077}
\caption{Rank\,100, SIM=51\%}
\end{subfigure}

%\begin{subfigure}{0.47\textwidth}
%\centering
%\includegraphics[width=\textwidth]{job_similarities_5024292-out/bin_aggzeros-0.5102--99timeseries5120077}
%\caption{Rank\,100, SIM=51\% }
%\end{subfigure}
\qquad
\begin{subfigure}{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{job_similarities_5024292-out/bin_aggzeros-0.7755--1timeseries8010306}

@ -420,54 +410,16 @@ While jobs exhibit short bursts of other active metrics even for low similarity,
\label{fig:job-M-hex-native}
\end{figure}

%
% \begin{figure}[bt]
% \begin{subfigure}{0.3\textwidth}
% \centering
% \includegraphics[width=\textwidth]{job_similarities_5024292-out/hex_phases-0.8831--1timeseries7826634}
% \caption{Rank 2, SIM=88\%}
% \end{subfigure}
% \begin{subfigure}{0.3\textwidth}
% \centering
% \includegraphics[width=\textwidth]{job_similarities_5024292-out/hex_phases-0.7963--2timeseries5240733}
% \caption{Rank 3, SIM=80\%}
% \end{subfigure}
% \begin{subfigure}{0.3\textwidth}
% \includegraphics[width=\textwidth]{job_similarities_5024292-out/hex_phases-0.4583--14timeseries4244400}
% \caption{Rank 15, SIM=46\%}
% \end{subfigure}
% \begin{subfigure}{0.3\textwidth}
% \centering
% \includegraphics[width=\textwidth]{job_similarities_5024292-out/hex_phases-0.2397--99timeseries7644009}
% \caption{Rank 100, SIM=24\%}
% \end{subfigure}
%
% \caption{Job-M with Q-phases, selection of similar jobs}
% \label{fig:job-M-hex-phases}
% \end{figure}

\section{Conclusion}%
\label{sec:summary}

We conducted a study to identify similar jobs based on timelines of nine I/O statistics.
The quantitative analysis shows that a diverse set of results can be found and that only a tiny subset of the 500k jobs is very similar to each of the three reference jobs.
For the small post-processing job, which is executed many times, all algorithms produce suitable results.
For Job-M, the algorithms exhibit a different behavior.
Job-L is tricky to analyze, because it is compute-intensive with only a single I/O phase at the beginning.
We found that the approach of computing the similarity of reference jobs to all jobs and ranking these was successful in finding related jobs that we were interested in.
We introduced a methodology to identify similar jobs based on timelines of nine I/O statistics.
The quantitative analysis shows that a diverse set of results can be found and that only a tiny subset of the 500k jobs is very similar to our reference job representing a typical HPC activity.
The Q-lev and Q-native algorithms work best according to our subjective qualitative analysis.
Typically, a related job stems from the same user/group and may have a related job name, but the approach was able to find other jobs as well.
The pre-processing of the algorithms and the distance metrics differ, leading to different definitions of similarity.
The data center support staff/user must decide how to define similarity in order to select the algorithm that suits best.
Another consideration could be to identify jobs that are found by all algorithms, i.e., jobs that meet a certain (rank) threshold for different algorithms.
That would increase the likelihood that these jobs are very similar and are what the user is looking for.
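Such a consensus set could be computed by intersecting the per-algorithm rankings; a minimal sketch (a hypothetical helper, not part of the described tooling):

```python
def consensus_jobs(rankings, threshold=100):
    """Return job IDs that appear within the given rank threshold for every algorithm.

    `rankings` maps an algorithm name (e.g., "Q-lev") to its ranked list of
    job IDs, Rank 1 first.
    """
    sets = [set(ranking[:threshold]) for ranking in rankings.values()]
    return set.intersection(*sets)
```
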

Our next step is to foster a discussion in the community to identify and define suitable similarity metrics for the different analysis purposes.

\subsection*{Acknowledgment} %% Remove this section if not needed
\textit{We thank the reviewers for their constructive contributions.}
Related jobs stem from the same user/group and may have a related job name, but the approach was able to find other jobs as well.
This was a first exploration of this methodology.
In the future, we will expand the study, comparing more jobs, in order to assess the suitability of the methodology.

\printbibliography%