Given that the quality of prediction is reasonably good at small model sizes,
we now focus on the run-time and throughput of the system. We recorded the
time required to generate predictions for the entire test set and plotted
it against model size for different values of *x*. Figure 8 shows the plot.
Note that at *x*=0.25 the system has to make predictions for 75,000 test cases. From the plot
we observe a substantial difference in the run-time between the
small model size and the full item-item prediction case. For *x*=0.25 the run-time is 2.002 seconds for a model size of 200, as opposed
to 14.11 for the basic item-item case. This difference is even more
prominent with *x*=0.8 where a model size of 200 requires only
1.292 seconds and the basic item-item case requires 36.34 seconds.
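The measurement itself is straightforward: time the generation of predictions for the whole test set under each configuration. A minimal sketch of such a timing harness, using a hypothetical stand-in model (the real model-based scheme would predict from the *k* most similar items):

```python
import time

class ConstantModel:
    """Stand-in recommender: always predicts a fixed rating.
    The real model-based scheme would combine the k most similar items."""
    def predict(self, user, item):
        return 3.0

def time_predictions(model, test_cases):
    """Wall-clock time to generate predictions for the entire test set."""
    start = time.perf_counter()
    preds = [model.predict(u, i) for u, i in test_cases]
    elapsed = time.perf_counter() - start
    return preds, elapsed

# Toy workload: 1,000 (user, item) pairs standing in for the test set.
cases = [(u, i) for u in range(100) for i in range(10)]
preds, elapsed = time_predictions(ConstantModel(), cases)
```

In the experiments above, the same measurement would be repeated for each model size and each value of *x*.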

These run-time numbers may be misleading, as we computed them for
different train/test ratios where the workload size, i.e., the number of
predictions to be generated, is different (recall that at *x*=0.3 our
algorithm uses 30,000 ratings as training data and uses the rest of
70,000 ratings as test data to compare predictions generated by the
system to the actual ratings). To make the numbers comparable we compute the
throughput (predictions generated per second) for the model based and
basic item-item schemes. Figure 8 charts these results. We
see that for *x*=0.3 and a model size of 100 the system generates
predictions for 70,000 test cases in 1.487 seconds, producing a throughput rate of 47,361, whereas the basic item-item scheme produced a throughput of only 4,961. At *x*=0.8 these two numbers are 21,505 and 550, respectively.