Create a simple system to read, merge, purge, sort, search, and write eclipse data


User Request: "Create a simple system to read, merge, purge, sort, search, and write eclipse data with error checking and providing fast lookup by catalog number."

Objectives:

1. Use C++ file I/O to read and write files and C++ standard I/O (cin and cout) for user interaction, with appropriate exception handling and appropriate error messages.

2. Encapsulate primitive arrays inside a templated class that provides controlled access to the array data, retains information on array capacity and use, and can be used to store data of any class or primitive type. Integrate appropriate exception handling into the templated array class. (A sketch of one possible such class appears after this list.)

3. Efficiently sort and search the data based on the field specified by the user. Integrate appropriate exception handling into classes that implement searching and sorting.

4. Store eclipse data in a linked list maintained in order, sorted by catalog number. Remove entries from the eclipse linked list while maintaining its order. Integrate appropriate exception handling into classes that implement linked lists.

5. Use a linked hash table with eclipse catalog numbers as keys for efficient retrieval of eclipse data based on catalog number.

6. Encapsulate linked hash tables inside a templated class that provides controlled access to the linked hash table data, retains information on linked hash table capacity and load factor, and can be used to store data of any class or primitive type.

7. Integrate appropriate exception handling into the templated linked hash table class.

8. Provide a design document explaining and justifying implementation choices for your linked hash table. Use a sophisticated hash function and/or collision resolution method.

9. Develop and use an appropriate design.

10. Use proper documentation and formatting.
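
For Objective 2, a minimal sketch of what such a templated array wrapper might look like is shown below. The class name Array, the exception type ArrayException, and the doubling growth policy are illustrative assumptions, not requirements of the assignment.

```cpp
// Illustrative sketch only: names and the growth policy are assumptions.
#include <string>

class ArrayException {                 // simple exception type carrying a message
public:
    explicit ArrayException(const std::string& msg) : message(msg) {}
    const std::string& what() const { return message; }
private:
    std::string message;
};

template <typename T>
class Array {
public:
    explicit Array(int initialCapacity = 8)
        : capacity(initialCapacity), used(0), data(new T[initialCapacity]) {}
    ~Array() { delete[] data; }

    int size() const        { return used; }       // slots currently in use
    int getCapacity() const { return capacity; }   // slots currently allocated

    // Bounds-checked access; throws instead of reading past the array.
    T& at(int index) {
        if (index < 0 || index >= used)
            throw ArrayException("Array index out of range");
        return data[index];
    }

    // Append an element, doubling the underlying storage when it is full.
    void append(const T& value) {
        if (used == capacity) grow();
        data[used++] = value;
    }

private:
    void grow() {
        int newCapacity = (capacity == 0) ? 1 : capacity * 2;
        T* bigger = new T[newCapacity];
        for (int i = 0; i < used; ++i) bigger[i] = data[i];
        delete[] data;
        data = bigger;
        capacity = newCapacity;
    }

    int capacity;
    int used;
    T*  data;

    // Copying disabled in this sketch to avoid double-deleting data.
    Array(const Array&);
    Array& operator=(const Array&);
};
```

A fixed-capacity version without grow() would satisfy the objective just as well; the resizing policy shown here is only one option.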

Description:

For this project, you will revise and improve Project 2 (included as an attachment) in one important way. You are encouraged to reuse and build on your code from Project 2.

Project 3 will have the same basic functionality as project 2, but it will have one major change "under the hood": because it is believed that users will most often want to search for eclipse data by catalog number, the catalog numbers will be used as keys to a hash table that stores pointers to the associated data. This will allow data to be looked up by catalog number in constant time, that is, in Θ(1) time. (Note that project 3 will still store the list of eclipse data in linked list data structures during merging and purging, and will still store it in a resizable array when sorting and searching on fields other than catalog number.)
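
To illustrate the kind of lookup this change enables, here is a hedged sketch of a find-by-catalog-number operation under a separate-chaining layout; the names (HashTable, Node, EclipseData), the modulo hash, and the chaining strategy are assumptions for illustration only, and the Θ(1) cost is the expected cost when the hash spreads keys well and the load factor is kept modest.

```cpp
// Hypothetical sketch of constant-time lookup by catalog number.
struct EclipseData;                 // eclipse record type, defined elsewhere

struct Node {
    long         key;               // catalog number used as the hash key
    EclipseData* value;             // pointer to the associated record
    Node*        nextInBucket;      // entries that hashed to the same bucket
};

class HashTable {
public:
    // Returns the record for catalogNum, or 0 if it is not in the table.
    EclipseData* find(long catalogNum) const {
        int bucket = static_cast<int>(catalogNum % tableSize);  // O(1) bucket choice
        for (Node* n = table[bucket]; n != 0; n = n->nextInBucket)
            if (n->key == catalogNum)       // chains stay short when the
                return n->value;            // load factor is kept modest
        return 0;
    }
private:
    Node** table;       // array of bucket head pointers
    int    tableSize;   // number of buckets
};
```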

Operational Issues:

From a user interface perspective, project 3 will behave as described for project 2, except that there will be two additional data printing/display options, as follows.

If the user enters 'H' for Hash display, project 3 will list the eclipse data, one eclipse per line, in the order they are stored in the hash table, each prepended with the bucket number (hash code) where it is stored. If a bucket is empty, the bucket number should still be displayed, followed by "NULL." If a bucket contains more than one item, each should be listed in its order within the bucket, one per line. If a bucket overflows somewhere besides another bucket, the overflow items should be prepended with "OVERFLOW:" and be displayed in their overflow order before proceeding to the next bucket. There should also be a blank line displayed between buckets. Note that this is thought of as a debug display, as it is unlikely to be of use to an end user but may help you to debug your project.

If the user enters 'L' for Linked display, project 3 will list the eclipse data, one eclipse per line, in the order they are stored in the linked list embedded in the hash table. Note that this should be the same order as seen with the 'C' (catalog number) option that runs through the linked list used for merging and purging.
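
As a rough illustration of the 'H' display, the loop below prints the bucket number before each entry, "NULL" for empty buckets, an "OVERFLOW:" prefix for items kept in a separate per-bucket overflow area, and a blank line between buckets. The Entry type, the per-bucket overflow array, and the function name are assumptions; adapt the traversal to however you actually structure your table.

```cpp
// Hypothetical sketch of the 'H' (hash) display under separate chaining
// with an optional overflow area per bucket.
#include <iostream>
#include <string>

struct Entry {
    std::string text;   // printable form of one eclipse record
    Entry*      next;   // next entry in the same bucket (or overflow area)
};

void printHashDisplay(Entry* const* buckets, Entry* const* overflow, int tableSize) {
    for (int b = 0; b < tableSize; ++b) {
        if (buckets[b] == 0) {
            std::cout << b << " NULL\n";                  // empty bucket
        } else {
            for (Entry* e = buckets[b]; e != 0; e = e->next)
                std::cout << b << " " << e->text << "\n"; // bucket number first
        }
        for (Entry* e = overflow[b]; e != 0; e = e->next)
            std::cout << "OVERFLOW: " << e->text << "\n"; // overflow items last
        std::cout << "\n";                                // blank line between buckets
    }
}
```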

Implementation Issues:

In most areas, project 3 will be implemented just as project 2 was. This includes how the project reads files and prints data, carries out user interaction via standard in and standard out, encapsulates C++ primitive arrays, how exception handling is implemented for arrays and similar classes, and how the list of eclipse data is stored in a linked list while data is being merged and purged.

The big implementation change is the data structure used in the code to find eclipse data based on catalog number. For project 3, you are no longer allowed to sort and search by catalog number using the array; you need to use a hash table instead. For this reason, you should use a linked hash table where the linked list interwoven into the table is ordered by catalog number.

Beyond these requirements, you have a lot of freedom to design your hash table implementation, so you should include a design document with your submission. Please make this a PDF file named "design.pdf". In this document, you should describe the hash function you chose, the bucket size you chose for your hash table, the hash collision resolution strategy you chose, and the load factor you chose, along with design justifications for each.

You are free to use any hash function you wish, so long as it results in hash codes within the bounds of your hash table. If you choose to implement a hash function that distributes keys close to uniformly and randomly throughout the table, you should note this in your documentation for the TA, so that he can assign you bonus points (up to 5 points, depending on the sophistication of the hash function used and your explanation of why its distribution is close to uniform and random). You are likewise free to use any bucket size and any hash collision resolution strategy you wish. You should justify both of these design choices, noting that some bucket sizes may be more appropriate for some collision resolution strategies and vice versa. If you choose to implement a particularly sophisticated collision resolution strategy, you should note this in your documentation and explain why it is appropriate for your hash table.
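
One possible hash function, offered only as an example and not a required choice, is a multiplicative hash of the catalog number reduced modulo the table size; the multiplier below is a prime close to 2^32 divided by the golden ratio, often credited to Knuth, and the function name is hypothetical.

```cpp
// Example only: multiplicative hashing on the catalog number. Any function
// whose result stays within the table bounds is acceptable.
unsigned int hashCatalog(long catalogNumber, unsigned int tableSize) {
    unsigned int key = static_cast<unsigned int>(catalogNumber);
    unsigned int scrambled = key * 2654435761u;   // mixes high and low bits of the key
    return scrambled % tableSize;                 // reduce into [0, tableSize)
}
```

Whatever function you pick, the design document should explain why its output always lands within the table bounds and how evenly it spreads the catalog numbers you expect to see.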

Note that when you first create your hash table, you will know the amount of data it is to contain, because you can wait to create it until you have read in the first file of data. Similarly, when you merge or purge data, you can use your adjusted list to create a new hash table for the adjusted amount of data. However, you should not attempt to fill your hash table completely, because this will almost surely cause too many collisions and degrade its performance below that of a sorted array. Instead, you should choose an appropriate load factor for your table. If you choose to do so, you may use the linked list of the hash table as the merge/purge linked list; if you do, be sure to note it in the design document.

You should implement the display/printing options as follows:

- 'C' should be accomplished by traversing the linked list used for merging and purging.
- 'O' should be accomplished by traversing the array used for searching and sorting.
- 'L' should be accomplished by traversing the linked list embedded within the hash table. (Note that this might be the same linked list as for option 'C'.)
- 'H' should be accomplished by traversing the hash table in bucket order. If your buckets are larger than size one, each bucket should be traversed in order before proceeding to the next bucket. If your hash table overflows to anything besides other buckets, the overflow data structures should also be traversed in order before proceeding to the next bucket.
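
As a hedged sketch of how these pieces can fit together, the node below carries two links: one for its bucket chain (walked by the 'H' display) and one for the embedded list kept in catalog-number order (walked by the 'L' display). The table is sized from the known data count and a chosen load factor. The field names, the 0.75 load factor, the modulo hash, and the insertion strategy are all illustrative assumptions, not requirements; destructors and duplicate-key handling are omitted for brevity.

```cpp
// Illustrative sketch of a linked hash table entry with dual links.
struct EclipseData;                         // record type defined elsewhere

struct HashNode {
    long         key;                       // catalog number
    EclipseData* value;                     // pointer to the eclipse record
    HashNode*    nextInBucket;              // chain within one bucket ('H' display)
    HashNode*    nextInOrder;               // embedded list ordered by key ('L' display)
};

class LinkedHashTable {
public:
    // Size the table from the known data count and a target load factor.
    LinkedHashTable(int expectedCount, double loadFactor = 0.75)
        : tableSize(static_cast<int>(expectedCount / loadFactor) + 1),
          buckets(new HashNode*[tableSize]),
          orderedHead(0)
    {
        for (int i = 0; i < tableSize; ++i) buckets[i] = 0;
    }

    void insert(long key, EclipseData* value) {
        HashNode* node = new HashNode;
        node->key = key;
        node->value = value;

        // Link into the bucket chain (front insertion keeps this step O(1)).
        int b = static_cast<int>(key % tableSize);   // replace with your chosen hash
        node->nextInBucket = buckets[b];
        buckets[b] = node;

        // Link into the embedded list, kept sorted by catalog number.
        HashNode** walk = &orderedHead;
        while (*walk != 0 && (*walk)->key < key)
            walk = &(*walk)->nextInOrder;
        node->nextInOrder = *walk;
        *walk = node;
    }

private:
    int        tableSize;
    HashNode** buckets;       // traversed in index order for the 'H' display
    HashNode*  orderedHead;   // traversed for the 'L' display
};
```

With a layout like this, the 'L' display is a single walk along nextInOrder, while the 'H' display walks the buckets array in index order and follows each bucket's nextInBucket chain.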

The only libraries you may use for this assignment are iostream, iomanip, string, and fstream (#include &lt;iostream&gt;, #include &lt;iomanip&gt;, #include &lt;string&gt;, #include &lt;fstream&gt;).

Attachment: Assignment Files.rar
