With the setup described in the previous chapter, we created a system that is able to capture and process biometric data.
The system encapsulates this data into an attestation message and sends it to the PIA, which acts as the DAA verifier in this configuration.
In the following, we show how well each part of the setup works and what performance the tested devices achieve.
Furthermore, we analyze the footprint in memory as well as on disk.
The tests are only applied to Systems 1 and 3, since System 2 has a hardware configuration comparable to System 3 but uses a CPU of an older generation.
Furthermore, only two of the available TPMs support the cryptographic operations required for ECDAA.
We discuss this issue in further detail in \autoref{sec:limitations}.
Consequently, System 2 was used as DAA verifier since this host does not require a TPM.
We split the tasks of a Digidow sensor into several parts to document the contribution of each:
\begin{itemize}
\item\emph{Digidow sensor embed}: Extract a face embedding using the TensorFlow application \texttt{img2emb}.
\item\emph{Digidow sensor collect}: Collect the IMA log and save it to disk.
Create a sha512sum of the file and put it together with all PCRs and the face embedding data into one message.
Calculate another sha512sum from the message itself and save it to disk.
\item\emph{Digidow sensor send}: Sign the message's hash with the TPM DAA key and send it together with the message to the DAA verifier.
The verifier saves the message and hash on its disk for further processing.
\end{itemize}
First, we look into the memory footprint of each part by executing it via \texttt{valgrind}.
It measures the allocated heap space in memory which is shown in \autoref{tab:memoryusage}.
\begin{table}
\renewcommand{\arraystretch}{1.2}
The memory usage is constant across all procedures except for creating the DAA message itself.
This step's memory footprint depends on the size of the files it aggregates, especially the IMA log.
In this case the memory usage is measured while IMA is off, representing a lower bound of memory usage for this part.
Besides calculating the face embedding of the captured image, the whole transaction can be executed using about 1.2\,MB of heap memory.
This would fit on most embedded devices running a Linux kernel.
However, the face embedding algorithm uses over 1.3\,GB and requires the majority of the computation time, as shown below.
The slight difference between the two systems in the processing step appears to be consistent over several runs.
\autoref{tab:wholeperformance} shows the time consumption for each relevant task for the Digidow sensor with its minimum, average and maximum results over 10000 runs.
\begin{table}
\renewcommand{\arraystretch}{1.2}
\centering
\caption{Performance results of joining a DAA group and sending a Digidow transaction message (n=10000)}\label{tab:wholeperformance}
The \emph{first} run is stated separately since it is done immediately after a system reboot where the resources cached by the kernel are not loaded yet.
Depending on the number of resources a single step needs, this additional overhead may be smaller or larger.
When IMA is enabled, the kernel has to check the hash of each file accessed for reading.
This hash must be extended into PCR 10 which makes the first run of each part significantly longer.
Especially the tensorflow application requires significantly more time for the first run.
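The extend operation that makes these first runs expensive is conceptually simple: the new PCR value is the hash of the old value concatenated with the measurement digest. A minimal sketch of the semantics, assuming a SHA-256 PCR bank (the TPM computes this internally; the code only illustrates the operation):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM extend semantics: new PCR = H(old PCR || digest of measurement).
    # The PCR can only be extended, never set, so the final value
    # commits to the whole sequence of measurements.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# A PCR starts zeroed; every measured file extends it once.
pcr10 = bytes(32)
for file_content in (b"contents of /sbin/init", b"contents of libc.so.6"):
    pcr10 = pcr_extend(pcr10, file_content)
```

Since each measured file triggers one such TPM operation, the cost of the first run scales with the number of files the step touches.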
With IMA set to enforcing, the kernel additionally controls access to the files being read.
This decision does not require additional resources compared to the fixing mode.
The file must be hashed in any case.
As long as the file \emph{integrity} is intact, PCR 10 and the IMA log file have to be written as well.
Consequently, the difference between fixing and enforcing mode amounts to comparing the computed file hash with the value stored in the extended attributes and granting or denying access depending on that result.
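This decision logic can be sketched as follows; a simplification, since the real appraisal lives in the kernel and reads the hash from the \texttt{security.ima} extended attribute:

```python
import hashlib

def appraise(file_content: bytes, stored_xattr_hash: str, enforcing: bool) -> bool:
    # Both modes compute the file hash (this work is always done);
    # only enforcing mode acts on the comparison.
    computed = hashlib.sha256(file_content).hexdigest()
    if not enforcing:
        return True  # fixing mode: always grant access (and fix up the xattr)
    return computed == stored_xattr_hash  # enforcing mode: deny on mismatch
```

This illustrates why the two modes perform almost identically: the expensive part, hashing the file, is shared, and only the final comparison differs.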
Since IMA measures every loaded resource, the corresponding log file will constantly increase during testing.
Unfortunately, the IMA log is required to collect the data for the Digidow attestation message.
Furthermore, every Digidow transaction contributes to the log, since the handover between the individual tasks is file-based.
Consequently, we expected the runtime to depend on the number of previous runs; the average and maximum runtimes in \autoref{tab:wholeperformance} therefore remain unavailable when IMA is enabled.
The graphs of \autoref{fig:time-digidow-transaction} show the runtime of each of the runs on both tested systems and with IMA in fixing or enforcing mode respectively.
\caption{Time consumption of a Digidow transaction on the tested systems}
\label{fig:time-digidow-transaction}
\end{figure}
Each run is split into the four parts of a Digidow transaction.
The graphs clearly show that our expectation of a linear relation between runtime and number of runs was not satisfied.
Collecting the IMA log appears to have a complexity of at least $O(n^2)$.
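This is consistent with a simple cost model: if transaction $i$ must re-read and hash the entire log accumulated so far, and the log grows by a constant number of entries per transaction, the total work over $n$ transactions grows quadratically. A toy model illustrating the effect (arbitrary cost units, not measured data):

```python
def total_collect_cost(n_runs: int, entries_per_run: int = 5) -> int:
    # Each run reads the whole log accumulated so far, so the cost of
    # run i is proportional to i * entries_per_run; summing over all
    # runs gives n*(n+1)/2 * entries_per_run, i.e. O(n^2) in total.
    return sum(i * entries_per_run for i in range(1, n_runs + 1))

# Doubling the number of runs roughly quadruples the total cost.
```

The model only captures the log-reading component; the per-file TPM extends discussed below add further, roughly linear, cost on top.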
Furthermore, it is interesting that System 1 with the newer AMD processor seems to be faster in the beginning.
When the number of runs reaches 10000, this system needs significantly more time than System 3 with the Intel processor.
Since the software setup on both systems is comparable (Kernel version, Linux distro, installed programs, setup with respect to \autoref{cha:implementation}), the reason for this difference can probably be found either in the microarchitectural implementation or in (less) optimized code for the AMD CPU.
When IMA is in fixing or enforcing mode, the corresponding log will be filled with information about every accessed file.
The numbers in \autoref{tab:imalogentries} are taken from the IMA log after 10000 Digidow transaction tests.
IMA was set to enforcing and the DAA member key was already in the TPM.
\begin{table}
\renewcommand{\arraystretch}{1.2}
\centering
\caption{Number of additional entries in the IMA log}
\label{tab:imalogentries}
\begin{tabular}{lr}
\toprule
&\textit{Additional log entries}\\
\midrule
\textit{Root login after boot}&1912 \\
\textit{Digidow sensor capture}&5 \\
\textit{Digidow sensor embed}&2561\\
\textit{Digidow sensor collect}&6 \\
\textit{Digidow sensor send}&12\\
\textit{Every other Digidow transaction}&5\\
\bottomrule
\end{tabular}
\end{table}
Given that the (very slow) hardware TPM had to extend PCR 10 for every line in the log, the slowdown is mainly caused by the interaction with the TPM itself.
Since the IMA log file is also essential for remote attestation, its contents must be transmitted to the DAA verifier.
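On the verifier side, the transmitted log can be checked by replaying it: starting from a zeroed PCR, the digest of each log entry is extended in order, and the result must match the quoted PCR 10. A minimal sketch, assuming a single SHA-256 bank and that each entry carries its measurement digest:

```python
import hashlib

def replay_ima_log(entry_digests: list) -> bytes:
    # Recompute PCR 10 by extending each measurement digest in log order.
    pcr = bytes(32)
    for digest in entry_digests:
        pcr = hashlib.sha256(pcr + digest).digest()
    return pcr

def verify(entry_digests: list, quoted_pcr10: bytes) -> bool:
    # The log is trustworthy only if the replay reproduces the PCR value
    # that the TPM signed as part of the attestation.
    return replay_ima_log(entry_digests) == quoted_pcr10
```

Any tampering with the transmitted log, including reordering entries, yields a different replayed value and the verification fails.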