Assignment: Sort Wars

If quicksort is so quick, why bother with anything else? If bubble sort is so bad, why even mention it? For that matter, why are there so many sorting algorithms?

Your mission (should you choose to accept it) is to investigate these and other questions in relation to the algorithms selection sort, insertion sort, merge sort, and quicksort.

Core Questions

1. Explain each of the algorithms in a way that would be understandable to an intelligent person who is not familiar with programming. You should not use any code (or even pseudo code) in your explanation, but you will probably need to use general concepts such as "compare" and "swap", and you'll certainly need to use procedural words such as "if" and "repeat".

You might find it helpful to consider an algorithm as if it were a game for which you need to define the rules. For example, here's how you could describe the bubble sort algorithm as if it were a solitaire game played with a deck of cards that contain the values to process: deal the cards face up in a row; repeatedly scan the row from left to right, and whenever two neighbouring cards are out of order, swap them; keep making passes over the row until a complete pass requires no swaps, at which point the cards are sorted.

2. Write a set of guidelines for helping a fellow programmer decide which sort algorithm would be most appropriate for a particular situation. Include in your guidelines a description of the advantages and disadvantages of each algorithm, together with an indication as to why those characteristics apply. Your goal is to provide enough information so that someone not familiar with the details of each algorithm would be able to decide which algorithm is right for them.

For example, if someone were considering using counting sort, then the following brief information could help them decide whether it is appropriate.

Algorithm

Counting Sort

Description

  • Count the number of times each different value appears, then overwrite the values back into the list in lowest-to-highest order, with each value repeated according to the counts. For example, if the value 42 appears 5 times, then you would write 42 into the sorted list 5 times.
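
By way of illustration (not something the guidelines themselves need to include), a minimal C++ sketch of that description might look like this, assuming the values are integers in a known range [0, k):

    #include <vector>

    // Counting sort sketch: assumes all values are ints in [0, k).
    void countingSort(std::vector<int>& a, int k) {
        std::vector<int> count(k, 0);           // auxiliary array of value counts
        for (int v : a)
            ++count[v];                         // tally each value
        int pos = 0;
        for (int v = 0; v < k; ++v)             // overwrite the list in ascending order,
            for (int c = 0; c < count[v]; ++c)  // repeating each value per its count
                a[pos++] = v;
    }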

Advantages

  • Usually faster than any of the comparison-based sorts. Algorithmic complexity is O(n + k) irrespective of data order, where n is the list length and k is the number of distinct values that might occur. The typical case is k << n, in which case the cost is effectively O(n).
  • Simple to code.

Disadvantages

  • Only usable where the values to be sorted can be used to index an array of value counts, which usually means the values are integers over a small range. In other words, the algorithm can't be used to sort common non-integral values such as strings and floats, and it's inappropriate even for integers if the range of values is large.
  • Requires an auxiliary array (to store the counts) of size equal to the number of different possible sort values. If the range of values is large, the cost of allocating and maintaining this array could be significant.

When to use

  • If your circumstances allow, it's hard to beat this algorithm. But because it places very tight restrictions on the nature of the data to sort, you will often have to choose another approach.

Questions

In this section, you'll need to be able to measure the speed of execution of parts of your code. On a Unix-based system, you can measure how much time a section of code takes by calling the system function getrusage before and after that section. The function returns information about various aspects of resource usage, including the amount of system time (time taken by system routines that you call) and the amount of user time (time taken by your own code). Note that this is process time, not "wall-clock" time, so it's an accurate measure even if the system is busy executing other people's code as well. Consult the documentation for getrusage if you need more information.
#include <sys/resource.h>   // for getrusage
#include <iostream>
using namespace std;

int main() {
    struct rusage before, after;   // for recording usage stats

    // do any needed initialisations
    getrusage(RUSAGE_SELF, &before);
    // execute the code you want to time
    getrusage(RUSAGE_SELF, &after);

    int secs = after.ru_utime.tv_sec - before.ru_utime.tv_sec;
    int usecs = after.ru_utime.tv_usec - before.ru_utime.tv_usec;
    cout << secs * 1000000 + usecs << endl;   // elapsed user time, in microseconds
}

Practical sort implementations usually combine more than one sorting algorithm, attempting to take advantage of the best characteristics of each. For example, a straightforward but effective approach for general-purpose sorting is to use quicksort, with the recursion stopping when the partitions reach a threshold size, then a final insertion sort pass to complete the process. The structure of the hybrid sort would look like this:
sort(list) {
    sort list with truncated quicksort
    sort list with insertion sort
}

truncated quicksort(list) {
    if list size is greater than threshold {
        partition list
        recursively sort first part with truncated quicksort
        recursively sort second part with truncated quicksort
    }
}

This approach is generally faster than using pure quicksort because insertion sort has a lower overhead than quicksort, and is thus faster provided the elements in the list are not far from their correct positions. To get the greatest speedup, the threshold for truncating the quicksort needs to be carefully chosen: too large, and the greater algorithmic cost of the insertion sort will overwhelm any lower overheads; too small, and the potential benefits of the combined approach are wasted.
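
To make this structure concrete, here is a minimal C++ sketch of the hybrid. The function names, the Lomuto partition scheme, and the fixed threshold of 16 are illustrative assumptions only, not part of the assignment; your prac 5 code may differ.

    #include <vector>
    #include <cstddef>
    #include <utility>   // std::swap

    // Illustrative cutover value only; the experiment in question 3
    // is what determines the real one.
    const int THRESHOLD = 16;

    // Lomuto-style partition (an assumed scheme; any scheme works):
    // returns the pivot's final index.
    static int partitionList(std::vector<int>& a, int lo, int hi) {
        int pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; ++j)
            if (a[j] < pivot) std::swap(a[i++], a[j]);
        std::swap(a[i], a[hi]);
        return i;
    }

    // Quicksort that stops recursing once a partition is at or below
    // the threshold, leaving it unsorted but close to its final place.
    static void truncatedQuicksort(std::vector<int>& a, int lo, int hi) {
        if (hi - lo + 1 > THRESHOLD) {
            int p = partitionList(a, lo, hi);
            truncatedQuicksort(a, lo, p - 1);
            truncatedQuicksort(a, p + 1, hi);
        }
    }

    // Insertion sort completes the job; after the truncated quicksort
    // every element is near its final position, so this pass is cheap.
    static void insertionSort(std::vector<int>& a) {
        for (std::size_t i = 1; i < a.size(); ++i) {
            int key = a[i];
            std::size_t j = i;
            while (j > 0 && a[j - 1] > key) { a[j] = a[j - 1]; --j; }
            a[j] = key;
        }
    }

    void hybridSort(std::vector<int>& a) {
        if (!a.empty()) truncatedQuicksort(a, 0, static_cast<int>(a.size()) - 1);
        insertionSort(a);
    }

Note that the insertion sort here runs over the whole list in a single final pass rather than being called on each small partition; both variants are common, and the pseudocode above uses the single-pass form.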

3. Design an experiment to determine the best threshold size for the combined "quicksort-plus-insertion-sort" implementation. You'll need to consider a range of data sizes, including both random and "worst-case" data sets.

Write a program that could be used to perform the experiment. You'll need to provide the sort code itself (use your code from prac 5) as well as a suitable main function for testing it (adapt the main function from prac 5).

Your experimental design should be sufficiently detailed that you could hand the task over to a tester who is not familiar with sorting algorithms or even with programming. Ideally, the tester should only need to run the program under specified conditions and record the results.
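
By way of illustration only, the testing harness might take the following shape. Everything here is an assumption rather than a requirement: hybridSort stands in for your prac 5 hybrid sort (adapted so that the cutover threshold is a parameter), and the particular sizes, thresholds, and data generators are placeholders your design should choose deliberately.

    #include <sys/resource.h>
    #include <iostream>
    #include <vector>
    #include <cstdlib>
    using namespace std;

    // Assumed to be your hybrid sort from prac 5, with the cutover
    // threshold exposed as a parameter (hypothetical signature).
    void hybridSort(vector<int>& data, int threshold);

    // User time consumed by this process so far, in microseconds.
    long userMicros() {
        struct rusage usage;
        getrusage(RUSAGE_SELF, &usage);
        return usage.ru_utime.tv_sec * 1000000L + usage.ru_utime.tv_usec;
    }

    int main() {
        const int sizes[]      = { 10000, 100000, 1000000 };      // data sizes to test
        const int thresholds[] = { 1, 2, 4, 8, 16, 32, 64, 128 }; // candidate cutovers

        for (int n : sizes) {
            for (int t : thresholds) {
                // Random data set.
                vector<int> data(n);
                for (int& v : data) v = rand();

                // For a "worst-case" run, use descending data instead, e.g.:
                // for (int i = 0; i < n; ++i) data[i] = n - i;

                long start = userMicros();
                hybridSort(data, t);
                long elapsed = userMicros() - start;

                cout << "n=" << n << " threshold=" << t
                     << " time=" << elapsed << "us" << endl;
            }
        }
    }

With this shape, the tester only needs to run the program under the specified conditions and record each printed line, which matches the hand-over requirement above.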

4. Run your experiment and report on the findings. Your report should include the data you gather, an analysis of that data, and a clear recommendation as to the best cutover threshold.

Consider how best to present your results. You'll certainly want to tabulate the data, but you might also find it helpful to plot it as well. Because the actual times will be heavily dependent on the data size, you might find it useful to normalise the times against the "ideal" time (by dividing by n log n) before plotting them.
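
For instance, a small helper along these lines (a sketch; the name and the base of the logarithm are arbitrary choices) would let runs of different sizes share a single plot:

    #include <cmath>

    // Scale a raw time (microseconds) by n log2 n so that measurements
    // taken at different data sizes are comparable on one plot.
    double normalisedTime(double micros, double n) {
        return micros / (n * std::log2(n));
    }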
