CSCI 2210 Research Paper: Analyzing Algorithm Performance using Profiling Tools - Prof. Go, Papers of Data Structures and Algorithms

An assignment for a computer science course in which students write a program that fills and sorts arrays using various sorting algorithms, profile the executions, and analyze the results. The assignment aims to teach students to use profiling tools to understand algorithm performance and to draw conclusions from the data. Students repeat the process with different array sizes and data characteristics, record the results in tabular form, and produce graphs for comparison. The final deliverables are the program, spreadsheets, and a report of observations and conclusions.

Typology: Papers


Uploaded on 08/18/2009

koofers-user-wvc-1

CSCI 2210 – Research Paper

Purpose

The purpose of this assignment is to learn to analyze algorithms using a profiling tool and to report on your conclusions.

Specifications

Write a short program that fills six arrays of 10 integers with the same random integer values and sorts them using the Sink, Selection, Insertion, QuickSort (original), QuickSort (median-of-three), and Shell (gap determined using 2.2) routines. The algorithms and code are given in the slides and/or in the course notes.

In successive executions of the program, repeat the process with data that has the following characteristics:

- Already in order
- In reverse order
- Constant: every value the same
- Almost in order: about a random 10% of the values out of order
- Almost random: about a random 10% of the values in order

Note: Of the six data organizations mentioned here, four are either in order (forward or reversed) or almost so; only two of the six are random or nearly random. Consider this when drawing conclusions.

Now repeat the entire process with arrays of 100 integers, then with arrays of 1000 integers, and finally with arrays of 5000 integers. Because of the recursive nature of some of the routines and the growing array sizes, you may reach a point where increasing N causes an error due to resource limitations of your machine. If so, do not try to resolve the problem; just stop there.

For each run of the program, profile the execution, paying particular attention to the performance of the sort routines. Record the profiles in tabular format so that it is easy to compare profiles across routines, across different values of N, and across data sets with different characteristics. Use a spreadsheet program (e.g., Excel) to record, format, and analyze your data.
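The data-set characteristics described above can be generated programmatically. The following is a minimal sketch in Python of one way to build such test arrays; it is an illustration only, not the assigned code (the course's routines come from the slides and notes), and the function name `make_data`, its `kind` labels, and the swap/slice strategy for the "almost" variants are all assumptions.

```python
import random

def make_data(n, kind, seed=42):
    """Build a test array of n integers with a given characteristic.
    Kinds mirror the assignment: random, sorted, reversed, constant,
    almost_sorted (~10% out of order), almost_random (~10% in order)."""
    rng = random.Random(seed)
    base = [rng.randint(0, 10 * n) for _ in range(n)]
    if kind == "random":
        return base
    if kind == "sorted":
        return sorted(base)
    if kind == "reversed":
        return sorted(base, reverse=True)
    if kind == "constant":
        return [base[0]] * n
    if kind == "almost_sorted":
        data = sorted(base)
        # each random swap disturbs two positions, so n//20 swaps ~ 10% of slots
        for _ in range(max(1, n // 20)):
            i, j = rng.randrange(n), rng.randrange(n)
            data[i], data[j] = data[j], data[i]
        return data
    if kind == "almost_random":
        data = base[:]
        # sort one random ~10% slice into order, leaving the rest random
        k = max(1, n // 10)
        start = rng.randrange(n - k + 1)
        data[start:start + k] = sorted(data[start:start + k])
        return data
    raise ValueError(f"unknown kind: {kind}")
```

Because every sorting routine must see the same values, give each routine its own copy (e.g., `list(make_data(n, kind))`) rather than sorting one shared array in place.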
Every table, row, and column must carry a meaningful label. Use the spreadsheet program to produce a meaningful graph comparing the algorithms on each data set.

In analyzing the data, you are looking for patterns from which you can draw conclusions such as those illustrated by the following statements. These are not the particular observations expected, but they should suggest the kinds of patterns you are seeking. If you see a pattern, you may want to test it with other data to verify your conclusion.

- The difference between the run times of the Insertion and Sinking sorts is less than .0001 second in all cases.
- The Quick Sort is always faster than the Selection Sort on random data, but the Selection Sort is faster on data that is already sorted.
- If the number of entries goes up by a factor of 5, the Quick Sort takes 52 times as long.
- The Insertion Sort is useful for data sets that are almost ordered but is not good for random data.
- The Sinking Sort works well for small data sets but not for data sets of 1000 or more items.

Look for patterns involving the same algorithm with different values of N, different algorithms with the same data sets, and the same algorithm with data sets that have different characteristics but the same N.

November 30, 2020
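The assignment asks for a real profiling tool, but the shape of the comparison table can be sketched with a simple timing harness. The following Python sketch times two of the named routines (Insertion and Selection) over several values of N on the same random data; the implementations and harness names are assumptions for illustration, not the routines from the course notes, and `time.perf_counter` is only a stand-in for a proper profiler.

```python
import random
import time

def insertion_sort(a):
    """In-place insertion sort."""
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def selection_sort(a):
    """In-place selection sort."""
    for i in range(len(a) - 1):
        m = min(range(i, len(a)), key=a.__getitem__)
        a[i], a[m] = a[m], a[i]

def time_sort(sort_fn, data):
    """Sort a copy of data (so every routine sees the same values)
    and return (elapsed seconds, sorted copy)."""
    copy = list(data)
    t0 = time.perf_counter()
    sort_fn(copy)
    return time.perf_counter() - t0, copy

# One row per N: the raw material for the tabular comparison.
random.seed(1)
for n in (10, 100, 1000):
    data = [random.randint(0, 10 * n) for _ in range(n)]
    row = [f"N={n}"]
    for fn in (insertion_sort, selection_sort):
        elapsed, out = time_sort(fn, data)
        assert out == sorted(data)  # sanity check: routine actually sorts
        row.append(f"{fn.__name__}: {elapsed:.6f}s")
    print("  ".join(row))
```

Copying each timing row into the spreadsheet, one column per routine and one row per (N, data characteristic) pair, gives exactly the labeled table and graphs the deliverables require.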