FIT1008/FIT2085 Practice 3 Assignment Notes

Overview:

The assignment is divided into 7 tasks. The first three are:

Task 1: Implement a hash table class in Python, using linear probing to resolve collisions. The methods to implement include __init__, __getitem__, __setitem__, and others.

Task 2: Implement several functions that read data from a file and store it in a hash table, recording the time elapsed since reading began; if the maximum allowed time is exceeded, raise an exception.

Task 3: Implement a function that reports statistics on the hash table, such as the number of collisions and the lengths of the probe chains.

Analysis:

The assignment mainly involves the concept of hash tables and methods for resolving collisions. In the implementation, first create an array; when placing data into the hash table, check whether the target position causes a collision. If it does, record the collision and check whether the next position is also occupied.

Topics involved:

Python, hash tables, linear probing

Original prac sheet (PDF):

FIT1008 Introduction to Computer Science (FIT2085: for Engineers)

Interview Prac 3 - Weeks 11 and 12

Semester 2, 2019

Learning objectives of this practical session

To be able to implement and use hash tables in Python.

Important requirements

For this prac, you are required to:

• Create a new file/module for each task named task[num].py (that is, task1.py, task2.py, and so on).

• Provide documentation for each basic piece of functionality in your code (see below).

• The documentation needs to include (in the docstring):

  - pre- and post-conditions
  - Big O best and worst time complexity
  - information on any parameters used

• Provide testing for those functions/methods explicitly specified in the questions. The testing needs to include:

  - a function to test it
  - a comment indicating the purpose of each test
  - at least two test cases per test function. These cases need to show that your functions/methods can handle both valid and invalid inputs.
  - assertions for testing the cases, rather than just printing out messages

  Your tests should be added to the corresponding test_task[num].py file provided.

• VERY IMPORTANT: As before, the code cannot use any of the Python functionality that would make the job trivial (like dictionaries), but this time you are allowed to use any Python list or string functionality you want.

Supporting Files

To get you started, you will find some supporting files on Moodle under Week 11:

• HashTable.py, Freq.py: Skeleton files for the HashTable and Freq classes, which you may use as starting points.

• test_prac3.py, test_task[num].py, test_common.py: Testing harnesses for each of the tasks.

• Additional text files for testing your dictionaries (and used by the testing harness).

Your code will be tested with an extended version of this harness, so make sure your code can run inside the harness. To make the testing work, we have made assumptions about the naming of instance variables, so please follow the naming used in the skeletons. Your modules should guard any computation/printing/etc. for the tasks with if __name__ == '__main__':, so they can be safely imported by the testing harness.

The harness you have been provided is not exhaustive, so don't rely on it to ensure correctness! Remember to add your own tests, as well.

Important: Checkpoint for Week 11

To reach the checkpoint you must complete Tasks 1, 2 and 3 by the end of your lab in Week 11, and submit all files corresponding to these tasks before leaving the lab. Remember: reaching the checkpoint is a hurdle.

Task 1 (6 marks)

Define a class HashTable that implements a hash table (accepting only strings as keys), using Linear Probing to resolve collisions. As in Prac 2, you should use a Python list to represent the underlying array. Include implementations for the following methods:

• __init__(self, table_capacity, hash_base): Creates a new hash table with initial capacity table_capacity, using base hash_base for the hash function (see hash below). If table_capacity or hash_base are not specified, the hash table should use appropriate default values.

• __getitem__(self, key): Returns the value corresponding to key in the hash table. Raises a KeyError if key does not exist in the hash table. Remember that this can be called from Python code as table[key].

• __setitem__(self, key, value): Sets the value corresponding to key in the hash table to be value. If the hash table is full and the key does not exist in the table yet, it first calls the rehash method (see below) and then reinserts the key and value. Called from Python as table[key] = value.

• __contains__(self, key): Returns True if key is in the table and False otherwise.

• hash(self, key): Calculates the hash value for the given key, using the uniform hash function specified in lecture 27 (page 35) with the base hash_base given on table creation.

• rehash(self): First creates a new array whose size is the smallest prime number in the Primes list below that is larger than twice the current size. It then updates the size of the hash table and reinserts all key-value pairs from the old array into the new array using the new size. Raises a ValueError if there is no such prime in the list. For now, this method is only called when the table is full and we want to insert an element.

Primes = [3, 7, 11, 17, 23, 29, 37, 47, 59, 71, 89, 107, 131, 163, 197, 239, 293, 353, 431, 521, 631, 761, 919, 1103, 1327, 1597, 1931, 2333, 2801, 3371, 4049, 4861, 5839, 7013, 8419, 10103, 12143, 14591, 17519, 21023, 25229, 30313, 36353, 43627, 52361, 62851, 75521, 90523, 108631, 130363, 156437, 187751, 225307, 270371, 324449, 389357, 467237, 560689, 672827, 807403, 968897, 1162687, 1395263, 1674319, 2009191, 2411033, 2893249, 3471899, 4166287, 4999559, 5999471, 7199369]

The file test_task1.py defines tests for some of the HashTable methods. But be warned: these tests are not comprehensive, and it is in your interest to think about what kinds of inputs we have not covered.

Important: you must add your own tests for the __setitem__ and rehash methods.
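To make the requirements concrete, here is a minimal sketch of a Linear Probing table. The default capacity and base, the polynomial form of hash, and the full-table exception are assumptions you should adapt to the lecture material; the real rehash over the Primes list is left out.

```python
class HashTable:
    """Sketch of a linear-probing hash table with string keys."""

    def __init__(self, table_capacity=1024, hash_base=31):
        self.count = 0
        self.table_size = table_capacity
        self.hash_base = hash_base
        self.array = [None] * table_capacity   # cells hold (key, value) or None

    def hash(self, key):
        # Polynomial hash: h = (h * base + ord(c)) mod size for each character.
        h = 0
        for c in key:
            h = (h * self.hash_base + ord(c)) % self.table_size
        return h

    def __setitem__(self, key, value):
        pos = self.hash(key)
        for _ in range(self.table_size):
            if self.array[pos] is None:        # empty slot: insert here
                self.array[pos] = (key, value)
                self.count += 1
                return
            if self.array[pos][0] == key:      # existing key: update in place
                self.array[pos] = (key, value)
                return
            pos = (pos + 1) % self.table_size  # linear probe to the next slot
        raise Exception("table is full")       # the real task rehashes instead

    def __getitem__(self, key):
        pos = self.hash(key)
        for _ in range(self.table_size):
            if self.array[pos] is None:
                raise KeyError(key)
            if self.array[pos][0] == key:
                return self.array[pos][1]
            pos = (pos + 1) % self.table_size
        raise KeyError(key)

    def __contains__(self, key):
        try:
            _ = self[key]
            return True
        except KeyError:
            return False
```

With this sketch, table = HashTable(11, 31); table["cat"] = 1 inserts, table["cat"] reads back 1, and "dog" in table is False until inserted.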

Task 2 (5 marks)

• Write a function load_dictionary(hash_table, filename, time_limit) that reads a file filename containing one word per line, and adds each word to hash_table with integer 1 as the associated data. time_limit is an optional argument: if it is specified and loading the dictionary takes more than time_limit seconds, the function should immediately raise an exception.

Important: you must add your own tests for the load_dictionary function, using one or more very small dictionary files to do so.

• Write a function load_dictionary_time(hash_base, table_size, filename, max_time) that creates a new hash table with base hash_base and size table_size, and returns the tuple (words, time), where words is the number of (distinct) words added to the table, and time is the time taken (in seconds) for load_dictionary to complete if it is less than or equal to max_time, and None otherwise.

• Download from Moodle the dictionary files english_small.txt, english_large.txt and french.txt. Write a Python function table_load_dictionary_time(max_time) that does the following: for each of these dictionaries and each combination of the values specified in the table below for TABLESIZE and b in the universal hash function, uses load_dictionary_time to time how long it takes for load_dictionary to run, and prints a line to the file output_task2.csv containing the name of the dictionary, the TABLESIZE and b used, the number of words added to the hash table, and either the time taken to run load_dictionary for that combination, or TIMEOUT if the time exceeded max_time.

        b    TABLESIZE
        1       250727
    27183       402221
   250726      1000081

Note that the above table gives a total of 3*3=9 combinations per dictionary file. Nested for loops would be a good way to implement these combinations, and you are allowed to use "in" for this.

• Execute table_load_dictionary_time(max_time) for max_time = 120 (i.e., 2 minutes) and use the information in the newly created file output_task2.csv to graph the time taken for each combination (i.e., a graph where the x axis has the 9 combinations, the y axis has the time, and each combination shows 3 columns, one per file; an example is given at the end of the prac sheet), using the same methodology as you used in the code review prac of week 5 (giving TIMEOUT the value max_time+10). Also create an explanation_task2.pdf file that contains this graph together with a short analysis regarding which values work best and which work poorly, and an explanation of why this might be the case.

IMPORTANT: The dictionary files use the utf-8 encoding. Depending on your system configuration, you may need to specify this when you open the file (e.g. open(a_file, 'r', encoding='utf-8')).

Note: If you are playing around with no time out, you can use Ctrl+C to stop execution of your program if some combination of values takes too long.

Task 3 (4 marks)

When discussing open addressing, we introduced two concepts: that of collision, which occurs when we attempt to insert a key but the location for that key is already occupied by a different key; and that of probe chain, which is the sequence of positions we inspect before finding an empty space.

Consider inserting keys s1, s2 and s3 into an empty table with linear probing, assuming all three hash to location 10. When we insert s1, location 10 is empty, so there is no collision. If we then insert s2, location 10 is full, but 11 is empty. This is a collision, with probe chain length 1. When we insert s3, 10 and 11 are full, but 12 is free. This is one collision, with probe chain length 2.

• Extend your HashTable class with a method statistics(self), which returns a tuple (collision_count, probe_total, probe_max, rehash_count), consisting of: (a) the total number of collisions, (b) the total length of the probe chains, (c) the length of the longest probe chain, and (d) the number of times rehash has been called.

Important: you must add your own tests for your statistics method.

• Write a function load_dictionary_statistics(hash_base, table_size, filename, max_time) which does the same as load_dictionary_time (from Task 2), but returns the following tuple: (words, time, collision_count, probe_total, probe_max, rehash_count), where the tuple elements represent the same information described above.

• Using the function above, write a function table_load_dictionary_statistics(max_time) that is similar to that in Task 2, but also prints the four counters above in the output_task3.csv file.

• Execute table_load_dictionary_statistics(120) and create an explanation_task3.pdf file that contains the associated graph and a comment regarding the relationship between these counters and the runtime, and regarding the length of the longest probe chain and the promise of O(1) time complexity. Explain why rehash_count is 0 in all runs (it should be!). Again, remember to include your csv and pdf files in your submission. Hint: To avoid duplicating this code later in the prac, you may wish to pass the HashTable class as an optional argument to load_dictionary_statistics.
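The counting in the worked example above can be sketched with a small helper. insert_with_stats and its key-only array are illustrative names, not part of the required interface; in your class the counters would be updated inside __setitem__.

```python
def insert_with_stats(array, key, start_pos):
    """Linear-probe key into array starting from start_pos.

    Returns the probe-chain length for this insertion
    (0 means no collision occurred).
    """
    size = len(array)
    pos = start_pos
    chain = 0
    while array[pos] is not None:
        chain += 1                     # slot occupied: one more probe
        pos = (pos + 1) % size
    array[pos] = key
    return chain
```

Inserting s1, s2 and s3, all starting at position 10, yields chain lengths 0, 1 and 2, so collision_count is 2, probe_total is 3 and probe_max is 2, matching the example in the text.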

CHECKPOINT

(You should reach this point during week 11)

Task 4 (3 marks)

Modify your hash table from the previous task to use Quadratic Probing to resolve collisions. Repeat the analysis from Task 3 on your Quadratic Probing hash table, and add to your explanation_task4.pdf file the graph associated with the table in output_task4.csv together with a comparison of the running time and reported statistics against those found when using Linear Probing.

Hint: If you followed the previous hint, you should not need to modify load_dictionary_statistics.
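Relative to Task 1, only the probe step changes. A sketch of the position sequence, assuming the common h, h+1, h+4, h+9, ... scheme (check the lectures for the exact variant required):

```python
def quadratic_probe_positions(start, table_size, max_steps):
    """Yield the slots inspected under quadratic probing:
    start + 0, start + 1, start + 4, start + 9, ... (mod table_size)."""
    for i in range(max_steps):
        yield (start + i * i) % table_size
```

For example, starting at slot 10 in a table of size 13, the first four positions inspected are 10, 11, 1 and 6.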

Task 5 (4 marks)

Implement a hash table that uses Separate Chaining to resolve collisions, where the linked data structure used is a Binary Search Tree (we will see this data type in Week 11, and you will be provided with an implementation of the Binary Search Tree, so do not worry about this just yet).

Note that the content of the array cells should initially be None. Whenever you need to add a (key, data) pair in such a cell, you should create an empty binary search tree (say my_tree = BinarySearchTree()), then insert the (key, data) into my_tree, and then put my_tree in the cell.

Once this is done, compare again in explanation_task5.pdf the runtime and the values of the counters obtained for Separate Chaining against those obtained for Quadratic and for Linear Probing.
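The cell logic described above can be sketched as follows. Since the course's BinarySearchTree is not reproduced here, a minimal stand-in with the same bracket interface is assumed; in your submission you would use the provided class instead.

```python
class BinarySearchTree:
    """Minimal stand-in for the provided BST; nodes are [key, value, left, right]."""

    def __init__(self):
        self.root = None

    def __setitem__(self, key, value):
        def insert(node):
            if node is None:
                return [key, value, None, None]
            if key < node[0]:
                node[2] = insert(node[2])   # go left
            elif key > node[0]:
                node[3] = insert(node[3])   # go right
            else:
                node[1] = value             # existing key: update
            return node
        self.root = insert(self.root)

    def __getitem__(self, key):
        node = self.root
        while node is not None:
            if key < node[0]:
                node = node[2]
            elif key > node[0]:
                node = node[3]
            else:
                return node[1]
        raise KeyError(key)


def chain_insert(array, pos, key, value):
    # Cells start as None; create a tree on first use, then insert into it.
    if array[pos] is None:
        array[pos] = BinarySearchTree()
    array[pos][key] = value
```

Two keys that hash to the same cell now share one BST rather than spilling into neighbouring slots, which is why the probe-chain counters behave differently from open addressing.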

Background

In most large collections of written language, the frequency of a given word in that collection is inversely proportional to its rank among the words. That is to say: the second most common word appears about half as often as the most common word, the third most common word appears about a third as often as the most common word, and so on [1].

If we count the number of occurrences of each word in such a collection, we can use just the number of occurrences to determine the approximate rank of each word. Taking the number of occurrences of the most common word to be max and using the relationship described earlier, we can give a rarity score to each word:

• Any word that appears at least max/100 times is considered to be common (score 0).

• Any word that appears less than max/1000 times is considered to be rare (score 2).

• Any word in between (appearing less than max/100 times and at least max/1000 times) is considered to be uncommon (score 1).

• Any word which has never been observed is a misspelling (score 3).

In the following we will use a hash table to facilitate a frequency analysis on a given text file.

Task 6 (4 marks)

Create a new class Freq that uses the Linear Probing hash table to perform a frequency analysis of a set of files as follows. First, add to this class the method add_file(self, filename), which reads each word from the file into the hash table, in such a way that the data associated with the word is its "occurrence count", i.e., the number of times the word has been "added" to the hash table (which is the same as the number of times it has already appeared in the file). The class (and its methods) must keep track of the number of occurrences max for the most common word read.

Then, add to this class a method rarity(self, word) that, given a word, returns its rarity score (an integer in the range [0, 3]).

We have provided ebooks of some classic texts from https://www.gutenberg.org/ in the Prac folder. Use Freq to read the provided 1342-0.txt, 2600-0.txt and 98-0.txt into your hash table. At the end of this process the table will store the number of times each word appears in the texts. We will consider the resulting hash table as a reasonably accurate representation of the frequency of words in the English language. Make sure you use the data you collected in the previous tasks to select an appropriate table size for the selected text files. Also, modify __setitem__ to call rehash when the load factor of the hash table is above 0.5, rather than when the table is full (this will not be used if you choose an appropriate table size, but is more realistic for cases where you do not know the number or size of the books being read).

Important: You must add your own tests for the add_file and rarity methods.

Task 7 (2 marks)

Add to the Freq class a method evaluate_frequency(self, other_filename) that returns a tuple (common, uncommon, rare, errors) of the percentages of words appearing in other_filename which have the corresponding rarity (with respect to the current frequency counts).

Use evaluate_frequency to compute the frequency of each rarity score in 84-0.txt, using the expected frequencies you constructed in Task 6.

[1] This is known as Zipf's law. You can read more at https://en.wikipedia.org/wiki/Zipf%27s_law

Example Plot

This is an example of the style of plot you should be generating:
