\documentclass{llncs}
\usepackage{grffile}
\usepackage{amssymb}
\usepackage{mathtools}
\usepackage{booktabs}
\usepackage{multirow}
\usepackage{adjustbox}
\usepackage{footnote}
\usepackage{framed}
\usepackage{cleveref}
\usepackage{listings}
\usepackage{subcaption}
\usepackage[backend=bibtex, style=numeric]{biblatex}
\addbibresource{bibliography.bib}
\usepackage{placeins}
\usepackage{todonotes}
%\usepackage[disable]{todonotes}
\newcommand{\eb}[1]{\todo[inline, color=green]{EB: #1}}
\newcommand{\jk}[1]{\todo[inline]{JK: #1}}
\usepackage{textcomp}
%\usepackage{verbatim}
\usepackage{adjustbox}
\usepackage{enumitem}
\usepackage{pmboxdraw} % to use UTF-8 inside a verbatim/listing environment
% may use if needed
%\usepackage{float}
%\usepackage{array}
%\usepackage{longtable}
%\usepackage{comment}
%\usepackage{pdflscape}
%\usepackage{adjustbox}
%\usepackage{tabularx}
\graphicspath{
{./assets/}
{./assets/fig/}
}
\begin{document}
\title{Toward a Workflow for Identifying Jobs with Similar I/O Behavior Utilizing Time Series Analysis}
\author{Julian Kunkel\inst{1} \and Eugen Betke\inst{2}}
\institute{
Georg-August-Universität Göttingen/GWDG
\email{julian.kunkel@gwdg.de}%
\and
ECMWF \email{eugen.betke@ecmwf.int}%
}
\maketitle
\begin{abstract}
One goal of support staff at a data center is to identify inefficient jobs and to improve their efficiency.
Therefore, a data center deploys monitoring systems that capture the behavior of the executed jobs.
While it is easy to utilize statistics to rank jobs based on the utilization of computing, storage, and network, it is tricky to find patterns in 100,000 jobs, e.g., to identify a class of jobs that is not performing well.
Similarly, when support staff investigates a specific job in detail, e.g., because it is inefficient or highly efficient, it is relevant to identify jobs related to such a blueprint.
This allows staff to understand the usage of the exhibited behavior better and to assess the optimization potential.
\medskip
In this article, our goal is to identify jobs similar to an arbitrary reference job.
In particular, we sketch a methodology that utilizes temporal I/O similarity to identify jobs related to the reference job.
Practically, we apply several previously developed time series algorithms.
A study is conducted to explore the effectiveness of the approach by investigating related jobs for a reference job.
The data stem from DKRZ's supercomputer Mistral and include more than 500,000 jobs executed during more than six months of operation.
Our analysis shows that the strategy and algorithms bear the potential to identify similar jobs, but more testing is necessary.
\end{abstract}
\section{Introduction}
Supercomputers execute thousands of jobs every day.
Support staff at a data center have two goals.
Firstly, they provide a service to users to enable the convenient execution of their applications.
Secondly, they aim to improve the efficiency of all workflows -- represented as batch jobs -- in order to allow the data center to serve more workloads.
In order to optimize a single job, its behavior and resource utilization must be monitored and then assessed.
Rarely, users will liaise with staff and request a performance analysis and optimization explicitly.
Therefore, data centers deploy monitoring systems and staff must pro-actively identify candidates for optimization.
Monitoring and analysis tools such as TACC Stats \cite{evans2014comprehensive}, Grafana \cite{chan2019resource}, and XDMod \cite{simakov2018workload} provide various statistics and time-series data for job execution.
The support staff should focus on workloads for which optimization is beneficial; for instance, the analysis of a job that is executed once on 20 nodes may not provide a good return on investment.
By ranking jobs based on their utilization, it is easy to find a job that exhibits extensive usage of computing, network, and I/O resources.
However, would it be beneficial to investigate this workload in detail and potentially optimize it?
For instance, a pattern that is observed in many jobs bears potential as the blueprint for optimizing one job may be applied to other jobs as well.
This is particularly true when the same application is run with similar inputs, but different applications may also lead to similar behavior.
Knowledge gained about a problematic or interesting job may then be transferred to similar jobs.
Therefore, it is useful for support staff (or a user) that investigates a resource-hungry job to identify similar jobs that are executed on the supercomputer.
It is non-trivial to identify jobs with similar behavior from the pool of executed jobs.
Re-executing the same job will lead to slightly different behavior, and a program may be executed with different inputs or using a different configuration (e.g., number of nodes).
Job names are defined by users; while a similar name may hint at a similar workload, finding other applications with the same I/O behavior would not be possible this way.
In \cite{Eugen20HPS}, we developed several distance measures and algorithms for the clustering of jobs based on the time series of their I/O behavior.
These distance measures can be applied to jobs with different runtimes and node counts, but they differ in the way they define similarity.
We showed that the measures can be used to cluster jobs; however, it remained unclear if the method can be used by data center staff to explore similar jobs effectively.
In this paper, we refine these algorithms slightly, include another algorithm, and apply them to rank jobs based on their temporal similarity to a reference job.
We start by introducing related work in \Cref{sec:relwork}.
In \Cref{sec:methodology}, we describe briefly the data reduction and the algorithms for similarity analysis.
Then, we perform our study by applying the methodology to a reference job, thereby providing an indicator for the effectiveness of the approach to identify similar jobs.
In \Cref{sec:evaluation}, the reference job is introduced and quantitative analysis of the job pool is made based on job similarity.
In \Cref{sec:timelines}, the 100 most similar jobs are investigated in more detail, and selected timelines are presented.
The paper is concluded in \Cref{sec:summary}.
\section{Related Work}
\label{sec:relwork}
Related work can be classified into distance measures, analysis of HPC application performance, inter-comparison of jobs in HPC, and I/O-specific tools.
%% DISTANCE MEASURES
The ranking of similar jobs performed in this article is related to clustering strategies.
Levenshtein (Edit) distance is a widely used distance metric indicating the number of edits needed to convert one string to another \cite{navarro2001guided}.
The comparison of the time series using various metrics has been extensively investigated.
In \cite{khotanlou2018empirical}, an empirical comparison of distance measures for the clustering of multivariate time series is performed.
Fourteen similarity measures are applied to 23 data sets; the study shows that no similarity measure produces statistically significantly better results than another.
However, the Swale scoring model \cite{morse2007efficient} produced the most disjoint clusters.
%In this model, gaps imply a cost.
% Lock-Step Measures and Elastic Measures
% Analysis of HPC application performance
The performance of applications can be analyzed using one of many tracing tools such as Vampir \cite{weber2017visual} that record the behavior of an application explicitly or implicitly by collecting information about the resource usage with a monitoring system.
Monitoring systems that record statistics about hardware usage are widely deployed in data centers to record system utilization by applications.
There are various tools for analyzing the I/O behavior of an application \cite{TFAPIKBBCF19}.
% time series analysis for inter-comparison of processes or jobs in HPC
For Vampir, a popular tool for trace file analysis, a Comparison View is introduced in \cite{weber2017visual} that allows users to manually compare traces of application runs, e.g., to compare optimized with original code.
Vampir generally supports the clustering of process timelines of a single job, allowing analysts to focus on relevant code sections and processes when investigating many processes.
%Chameleon \cite{bahmani2018chameleon} extends ScalaTrace for recording MPI traces but reduces the overhead by clustering processes and collecting information from one representative of each cluster.
%For the clustering, a signature is created for each process that includes the call-graph.
In \cite{halawa2020unsupervised}, 11 performance metrics including CPU and network are utilized for agglomerative clustering of jobs, showing the general effectiveness of the approach.
In \cite{rodrigo2018towards}, a characterization of the NERSC workload is performed based on job scheduler information (profiles).
Profiles that include MPI activities have proven effective for identifying the code that is executed \cite{demasi2013identifying}.
Many approaches for clustering applications operate on profiles for compute, network, and I/O \cite{emeras2015evalix,liu2020characterization,bang2020hpc}.
For example, Evalix \cite{emeras2015evalix} monitors system statistics (from proc) in 1-minute intervals, but for the analysis, they are converted to a profile by removing the time dimension, i.e., computing the average CPU, memory, and I/O usage over the job runtime.
% I/O-specific tools
PAS2P \cite{mendez2012new} extracts the I/O patterns from application traces and then allows users to manually compare them.
In \cite{white2018automatic}, a heuristic classifier is developed that analyzes the I/O read/write throughput time series to extract the periodicity of the jobs -- similar to Fourier analysis.
The LASSi tool \cite{AOPIUOTUNS19} periodically monitors Lustre I/O statistics and computes a "risk" factor to identify I/O patterns that stress the file system.
In contrast to existing work, our approach allows a user to identify similar activities based on the temporal I/O behavior recorded by a data center-wide deployed monitoring system.
\section{Methodology}
\label{sec:methodology}
The purpose of the methodology is to allow users and support staff to explore all executed jobs on a supercomputer in order of their similarity to the reference job.
Therefore, we first define how a job's data is represented, then describe the algorithms used to compute the similarity, and finally the methodology to investigate jobs.
\subsection{Job Data}
On the Mistral supercomputer at DKRZ, the monitoring system \cite{betke20} gathers nine I/O metrics at ten-second intervals on all nodes for the two Lustre file systems, together with general job metadata from the SLURM workload manager.
The results are 4D data (time, nodes, metrics, file system) per job.
The distance measures should handle jobs of different lengths and node counts.
In the open-access article \cite{Eugen20HPS}, we discussed a variety of options, from 1D job profiles to data reductions for comparing time series data, and the general workflow and pre-processing in detail.
We will be using this representation.
In a nutshell, each job executed on Mistral is partitioned into 10-minute segments\footnote{We found in preliminary experiments that 10 minutes reduces compute time and noise, i.e., the variation of the statistics when re-running the same job.}, and the arithmetic mean of each metric is computed; the value is then categorized into NonIO (0), HighIO (1), and CriticalIO (4) for values below the 99th percentile, up to the 99.9th percentile, and above, respectively.
The values are chosen to be 0, 1, and 4 because we derive metrics arithmetically from them: naturally, a value of 0 indicates that no I/O issue appears, and we weight critical I/O as 4x as important as high I/O.
This strategy ensures that the same approach can be applied to other HPC systems regardless of the actual distribution of these statistics at that data center.
After the mean value across nodes is computed for a segment, the resulting numeric value is encoded either using a binary (I/O activity in the segment: yes/no) or a hexadecimal representation (quantizing the numerical performance value into 0-15), which is then ready for similarity analysis.
By pre-filtering jobs with no I/O activity -- their sum across all dimensions and time series is equal to zero -- the dataset is reduced from 1 million jobs to about 580k jobs.
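To make this reduction concrete, the following minimal sketch illustrates the segmentation, categorization, and coding steps in Python; the function names and the exact mapping of the node-averaged value onto the 16 hexadecimal levels are our own assumptions for illustration, not the implementation of \cite{Eugen20HPS}.
\begin{lstlisting}[language=Python]
import numpy as np

SEG_LEN = 60  # a 10-minute segment holds 60 samples at 10 s intervals

def segment_means(ts):
    """Arithmetic mean of each 10-minute segment of one metric."""
    n = len(ts) // SEG_LEN * SEG_LEN
    return ts[:n].reshape(-1, SEG_LEN).mean(axis=1)

def categorize(means, p99, p999):
    """NonIO (0) below the 99th percentile, HighIO (1) up to the
    99.9th percentile, CriticalIO (4) above."""
    cats = np.zeros(len(means))
    cats[means >= p99] = 1
    cats[means >= p999] = 4
    return cats

def encode(per_node_cats):
    """Mean across nodes, then binary and hexadecimal codings."""
    seg = per_node_cats.mean(axis=0)  # reduce the node dimension
    binary = ''.join('1' if v > 0 else '0' for v in seg)
    # assumed mapping of the [0, 4] mean onto the 16 hex levels
    hexa = ''.join(format(min(int(v / 4 * 15 + 0.5), 15), 'x') for v in seg)
    return binary, hexa
\end{lstlisting}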
\subsection{Algorithms for Computing Similarity}
We reuse the B and Q algorithms developed in~\cite{Eugen20HPS}: B-all, B-aggz(eros), Q-native, Q-lev, and Q-phases.
They differ in the way similarity is defined: the time series are encoded using either binary or hexadecimal quantization, and the distance measure is either the Euclidean distance or the Levenshtein distance.
B-all determines the similarity between binary codings by means of Levenshtein distance.
B-aggz is similar to B-all, but computes similarity on binary codings where subsequent segments of zero activities are replaced by just one zero.
Q-lev determines the similarity between quantized codings by using Levenshtein distance.
Q-native uses a performance-aware similarity function, i.e., the distance between two jobs for a metric is $\frac{|m_{\text{job1}} - m_{\text{job2}}|}{16}$.
One of our basic considerations is that a short job may run longer, e.g., when restarted with a larger input file (which can stretch the length of the I/O and compute phases) or when run with more simulation steps.
There are further alternatives for how a longer job may relate to a shorter job, but we do not consider them for now.
In this article, we consider these different behavioral patterns and attempt to identify situations where the I/O pattern of a shorter job is contained in a longer job.
Therefore, for jobs of different lengths, a sliding-window approach is applied that finds the location in the longer job where the similarity to the shorter job is highest.
Q-phases extracts phase information and performs a phase-aware and performance-aware similarity computation: it extracts I/O phases from our 10-minute segments and computes the similarity between the most similar I/O phases of both jobs.
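To illustrate how such measures can be realized, the following sketch shows a Levenshtein-based similarity (as used by B-all and Q-lev) and a simplified, single-metric variant of the sliding-window comparison described above; it is our own reconstruction under the stated definitions, not the actual implementation from \cite{Eugen20HPS}.
\begin{lstlisting}[language=Python]
def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two codings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def sim_lev(a, b):
    """Similarity in [0, 1] derived from the edit distance."""
    return 1 - levenshtein(a, b) / max(len(a), len(b))

def sim_native(a, b):
    """Sliding window: align the shorter hexadecimal coding at every
    offset of the longer one; per-segment distance is |m1 - m2| / 16."""
    short, long_ = (a, b) if len(a) <= len(b) else (b, a)
    best = 0.0
    for off in range(len(long_) - len(short) + 1):
        d = sum(abs(int(s, 16) - int(l, 16)) / 16
                for s, l in zip(short, long_[off:]))
        best = max(best, 1 - d / len(short))
    return best
\end{lstlisting}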
\subsection{Methodology}
Our strategy for localizing similar jobs works as follows:
\begin{itemize}
\item A user\footnote{This can be support staff or a data center user that was executing the job.} provides a reference job ID and selects a similarity algorithm.
\item The system iterates over all jobs of the job pool, computing the similarity to the reference job using the specified algorithm.
\item It sorts the jobs based on the similarity to the reference job.
\item It visualizes the cumulative job similarity allowing the user to understand how job similarity is distributed.
\item The user starts the inspection by looking at the most similar jobs first.
\end{itemize}
The user can decide on the criterion for when to stop inspecting jobs: based on the similarity value, the number of investigated jobs, or the distribution of the job similarity.
For the latter, it is interesting to investigate clusters of similar jobs, e.g., if there are many jobs between 80-90\% similarity but few between 70-80\%.
For the inspection of the jobs, a user may explore the job metadata, search for similarities, and explore the time series of a job's I/O metrics.
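A minimal sketch of this ranking step, assuming the job pool is given as coded strings and using one of the similarity functions sketched above, could look as follows (the plotting details are our own choice):
\begin{lstlisting}[language=Python]
import matplotlib.pyplot as plt

def rank_similar_jobs(reference, pool, similarity, top_n=100):
    """Score all jobs against the reference, sort them, and plot the
    similarity by rank so the user can spot clusters."""
    scored = sorted(((similarity(reference, coding), job_id)
                     for job_id, coding in pool.items()), reverse=True)
    plt.plot(range(1, len(scored) + 1), [s for s, _ in scored])
    plt.xlabel('rank'); plt.ylabel('similarity')
    plt.savefig('similarity-by-rank.png')
    return scored[:top_n]  # inspection starts with the most similar jobs
\end{lstlisting}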
\section{Reference Job}%
\label{sec:refjobs}
For this study, we chose the reference job called Job-M: a typical MPI-parallel 8-hour compute job on 128 nodes that writes time series data after some spin-up. %CHE.ws12
The segmented timelines of the job are visualized in \Cref{fig:refJobs} -- remember that the mean value is computed across all nodes on which the job ran.
This coding is also used for the Q algorithms, thus this representation is what the algorithms will analyze; B algorithms merge all timelines together as described in~\cite{Eugen20HPS}.
The figures show the values of active metrics ($\neq 0$); if few are active, then they are shown in one timeline, otherwise, they are rendered individually to provide a better overview.
For example, we can see that several metrics increase in Segment\,12.
We can also see an interesting result of our categorized coding: the \lstinline|write_bytes| are larger than 0 while the \lstinline|write_calls| are 0\footnote{The reason is that a few write calls transfer many bytes; the number of calls stays below our 99\%-quantile, therefore, write calls are set to 0.}.
\begin{figure}
\includegraphics[width=\textwidth]{job-timeseries5024292}
\caption{Segmented timelines of Job-M (runtime=28,828\,s, segments=48)}%
\label{fig:refJobs}
\end{figure}
\section{Evaluation}%
\label{sec:evaluation}
In the following, we assume the reference job (Job-M) is given, and we aim to identify similar jobs.
For the reference job and each algorithm, we created CSV files with the computed similarity to all other jobs from our job pool (covering 203 days of production on Mistral).
During this process, the runtime of the algorithm is recorded.
Then we inspect the correlation between the similarity and the number of identified jobs.
Finally, the quantitative behavior of the 100 most similar jobs is investigated.
\subsection{Performance}
To measure the performance of computing the similarity to the reference job, the algorithms are executed 10 times on a compute node at DKRZ equipped with two Intel Xeon E5-2680v3 CPUs @ 2.50GHz and 64GB DDR4 RAM.
A boxplot for the runtimes is shown in \Cref{fig:performance}.
The runtime is normalized for 100k jobs, i.e., for B-all it takes about 41\,s to process 100k jobs out of the 500k total jobs that this algorithm will process.
Generally, the B algorithms are the fastest, while the Q algorithms often take 4-5x as long.
Q-phases and the Levenshtein-based algorithms are significantly slower.
Note that the current algorithms are sequential and executed on just one core.
They could easily be parallelized, which would then allow an online analysis.
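Since each job in the pool is scored independently of the others, such a parallelization is straightforward; a sketch using Python's standard multiprocessing module (an assumption, not our current implementation) could be:
\begin{lstlisting}[language=Python]
from functools import partial
from multiprocessing import Pool

def score(reference, item):
    job_id, coding = item
    return job_id, sim_lev(reference, coding)  # any of the measures

def parallel_similarities(reference, job_pool, workers=8):
    """Distribute the independent similarity computations over cores."""
    with Pool(workers) as p:
        return dict(p.map(partial(score, reference), job_pool.items()))
\end{lstlisting}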
\begin{figure}
\begin{subfigure}{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{progress_5024292-out-boxplot}
\caption{Runtime of the algorithms to compute the similarity to our reference job}%
\label{fig:performance}
\end{subfigure}
\qquad
\begin{subfigure}{0.47\textwidth}
\centering
\includegraphics[width=0.85\textwidth]{job_similarities_5024292-out/user-ids}
\caption{User information for all 100 top-ranked jobs. Each color represents a specific user for the given data.}
\label{fig:userids}
\end{subfigure}
\caption{Algorithm runtime and user distribution}
\end{figure}
\subsection{Quantitative Analysis}
In the quantitative analysis, we explore how, for the different algorithms, the similarity of the jobs in our pool to our reference job is distributed.
The support team in a data center may have time to investigate the most similar jobs.
Time for the analysis is typically limited; for instance, the team may analyze the 100 most similar jobs and rank them. We refer to these as the Top\,100 jobs, and \textit{Rank\,i} refers to the job that has the i-th highest similarity to the reference job -- sometimes these values can be rather close together, as can be seen in the histogram in \Cref{fig:hist}, which shows the actual number of jobs with a given similarity.
As we focus on a feasible number of jobs, we crop it at 100 jobs (the total number of jobs is still given).
It turns out that both B algorithms produce nearly identical histograms, and we omit one of them.
In the figure, we can again see that the algorithms behave differently.
We can see a cluster of jobs with higher similarity (for B-all and Q-native at a similarity of 75\%).
Generally, the number of jobs grows steadily across the relevant similarity range.
Practically, the support team would start with Rank\,1 (the most similar job, i.e., the reference job itself) and walk down until the jobs look different, or until a cluster of jobs with close similarity is analyzed.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth,trim={0 0 0 2.0cm},clip]{job_similarities_5024292-out/hist-sim}
\caption{Histogram for the number of jobs (bin width: 2.5\%, numbers are the actual job counts). B-aggz is nearly identical to B-all and therefore omitted.}%
\label{fig:hist}
\end{figure}
\subsubsection{Inclusivity and Specificity}
When analyzing the overall population of jobs executed on a system, we expect that some workloads are executed several times (with different inputs but with the same configuration) or are executed with slightly different configurations (e.g., node counts, timesteps).
Thus, potentially our similarity analysis of the job population may just identify the re-execution of the same workload.
Typically, the support staff would identify the re-execution of jobs by inspecting job names, which are user-defined generic strings.
To understand if the analysis is inclusive and identifies different applications, we use two approaches with our Top\,100 jobs:
we explore the distribution of users (and groups), runtimes, and node counts across these jobs.
An inclusive algorithm should cover different users, node counts, and runtimes.
To confirm the presented hypotheses, we analyzed the job metadata, comparing job names; this validates the quantitative results discussed in the following.
\paragraph{User distribution.}
To understand how the Top\,100 are distributed across users, the data is grouped by user ID and counted.
\Cref{fig:userids} shows the stacked user information, where the lowest stack is the user with the most jobs and the topmost user in the stack has the smallest number of jobs.
Jobs from 13 users are included; about 25\% of the jobs stem from the same user; the three Q algorithms include more users (29, 33, and 37 users) than the two B algorithms.
We did not include the group analysis in the figure, as the user and group counts are roughly proportional; at most, the number of users is 2x the number of groups.
Thus, users are likely from the same group, and the number of groups is similar to the number of unique users.
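Assuming the Top\,100 are available as a table with per-job metadata, this grouping is straightforward, e.g., with pandas (the file and column names are hypothetical):
\begin{lstlisting}[language=Python]
import pandas as pd

jobs = pd.read_csv('job_similarities.csv')  # job_id, user_id, similarity
top100 = jobs.nlargest(100, 'similarity')
per_user = top100.groupby('user_id').size().sort_values(ascending=False)
print(per_user)  # jobs per user, most active user first
\end{lstlisting}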
\paragraph{Node distribution.}
\Cref{fig:nodes-job} shows a boxplot for the node counts in the Top\,100 -- the red line marks the reference job.
All algorithms reduce over the node dimension; therefore, we naturally expect a wide inclusion across the node range as long as the average I/O behavior of the jobs is similar.
We can observe that the range of nodes for similar jobs is between 1 and 128.
\paragraph{Runtime distribution.}
The job runtime of the Top\,100 jobs is shown using boxplots in \Cref{fig:runtime-job}.
While all algorithms can compute the similarity between jobs of different lengths, the B algorithms and Q-native penalize jobs of different lengths, preferring jobs of very similar lengths.
Q-phases is able to identify much shorter or longer jobs.
\begin{figure}[bt]
\centering
\begin{subfigure}{0.47\textwidth}
\includegraphics[width=\textwidth]{job_similarities_5024292-out/jobs-nodes}
\caption{Node counts (reference job: 128 nodes)}%
\label{fig:nodes-job}
\end{subfigure}
\quad
\begin{subfigure}{0.47\textwidth}
\includegraphics[width=\textwidth]{job_similarities_5024292-out/jobs-elapsed}
\caption{Runtime (reference job: 28,828\,s)}%
\label{fig:runtime-job}
\end{subfigure}
\caption{Distribution for all 100 top-ranked jobs}
\end{figure}
%%%%%%%%%%% %%%%%%%%%%% %%%%%%%%%%% %%%%%%%%%%% %%%%%%%%%%% %%%%%%%%%%% %%%%%%%%%%% %%%%%%%%%%%
\section{Assessing Timelines for Similar Jobs}%
\label{sec:timelines}
To verify the suitability of the similarity metrics, for each algorithm, we carefully investigated the timelines of each of the jobs in the Top\,100.
We subjectively found that the approach works very well and identifies suitable similar jobs.
To demonstrate this, we include a selection of job timelines and interesting job profiles.
Inspecting the Top\,100 highlights the differences between the algorithms.
All algorithms identify a diverse range of job names for this reference job in the Top\,100.
The number of unique names is 19, 38, 49, and 51 for B-aggz, Q-phases, Q-native, and Q-lev, respectively.
When inspecting their timelines, the jobs that are similar according to the B algorithms (see \Cref{fig:job-M-bin-aggzero}) subjectively appear to us to be different.
The reason lies in the definition of the B-* similarity, which aggregates all I/O statistics into one timeline.
The other algorithms like Q-lev (\Cref{fig:job-M-hex-lev}) and Q-native (\Cref{fig:job-M-hex-native}) seem to work as intended:
While jobs exhibit short bursts of other active metrics even at low similarity, we can visually identify a relevant similarity, particularly for Rank\,2 and Rank\,3, which have a high similarity of 90+\%. For Rank\,15 to Rank\,100, with around 70\% similarity, a partial match of the metrics is still given.
\begin{figure}[bt]
\begin{subfigure}{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{job_similarities_5024292-out/bin_aggzeros-0.7347--14timeseries4498983}
\caption{Rank\,15, SIM=73\%}
\end{subfigure}
%\begin{subfigure}{0.47\textwidth}
%\centering
%\includegraphics[width=\textwidth]{job_similarities_5024292-out/bin_aggzeros-0.5102--99timeseries5120077}
%\caption{Rank\,100, SIM=51\% }
%\end{subfigure}
\qquad
\begin{subfigure}{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{job_similarities_5024292-out/bin_aggzeros-0.7755--1timeseries8010306}
\caption{Rank\,2, SIM=78\%}
\end{subfigure}
\caption{Job-M with B-aggz, selection of similar jobs}%
\label{fig:job-M-bin-aggzero}
\end{figure}
\begin{figure}[bt]
\begin{subfigure}{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{job_similarities_5024292-out/hex_lev-0.9365--2timeseries5240733}
\caption{Rank\,3, SIM=94\%}
\end{subfigure}
\begin{subfigure}{0.47\textwidth}
\includegraphics[width=\textwidth]{job_similarities_5024292-out/hex_lev-0.7392--15timeseries7651420}
\caption{Rank\,15, SIM=74\%}
\end{subfigure}
\begin{subfigure}{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{job_similarities_5024292-out/hex_lev-0.9546--1timeseries7826634}
\caption{Rank\,2, SIM=95\%}
\end{subfigure}
\begin{subfigure}{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{job_similarities_5024292-out/hex_lev-0.7007--99timeseries8201967}
\caption{Rank\,100, SIM=70\%}
\end{subfigure}
\caption{Job-M with Q-lev, selection of similar jobs}%
\label{fig:job-M-hex-lev}
\end{figure}
\begin{figure}[bt]
\begin{subfigure}{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{job_similarities_5024292-out/hex_native-0.9878--1timeseries5240733}
\caption{Rank\,2, SIM=99\%}
\end{subfigure}
\begin{subfigure}{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{job_similarities_5024292-out/hex_native-0.9651--2timeseries7826634}
\caption{Rank\,3, SIM=97\%}
\end{subfigure}
\begin{subfigure}{0.47\textwidth}
\includegraphics[width=\textwidth]{job_similarities_5024292-out/hex_native-0.9084--14timeseries8037817}
\caption{Rank\,15, SIM=91\%}
\end{subfigure}
\begin{subfigure}{0.47\textwidth}
\centering
\includegraphics[width=\textwidth]{job_similarities_5024292-out/hex_native-0.8838--99timeseries7571967}
\caption{Rank\,100, SIM=88\%}
\end{subfigure}
\caption{Job-M with Q-native, selection of similar jobs}%
\label{fig:job-M-hex-native}
\end{figure}
\section{Conclusion}%
\label{sec:summary}
We introduced a methodology to identify similar jobs based on timelines of nine I/O statistics.
The quantitative analysis shows that a diverse set of results can be found and that only a tiny subset of the 500k jobs is very similar to our reference job, which represents a typical HPC activity.
Q-lev and Q-native work best according to our subjective qualitative analysis.
Related jobs stem from the same user/group and may have a related job name, but the approach was able to find other jobs as well.
This was the first exploration of this methodology.
In the future, we will expand the study by comparing more jobs in order to assess the suitability of the methodology.
%\FloatBarrier
\printbibliography%
\end{document}