Design and Analysis of Algorithms_Decrease-and-Conquer
I collected and compiled this pseudocode from the book:
<<Introduction to the Design and Analysis of Algorithms, Second Edition>> by Anany Levitin
Note that throughout the paper, we assume that inputs to algorithms fall within their specified ranges and hence require no verification. When implementing algorithms as programs to be used in actual applications, you should provide such verifications.
About pseudocode: For the sake of simplicity, we omit declarations of variables and use indentation to show the scope of such statements as for, if and while. As you will see below, we use an arrow <- for the assignment operation and two slashes // for comments.
Algorithm InsertionSort(A[0..n-1])
// Sorts a given array by insertion sort
// Input: An array A[0..n-1] of n orderable elements
// Output: Array A[0..n-1] sorted in nondecreasing order
for i <- 1 to n-1 do
    v <- A[i]
    j <- i-1
    while j ≥ 0 and A[j] > v do
        A[j+1] <- A[j]
        j <- j-1
    A[j+1] <- v
Consider the following version of insertion sort:
Algorithm InsertionSort2(A[0..n-1])
for i <- 1 to n-1 do
    j <- i-1
    while j ≥ 0 and A[j] > A[j+1] do
        swap(A[j], A[j+1])
        j <- j-1
What is its time efficiency? How does it compare to that of the version given above?
The efficiency classes of both versions will be the same. The inner loop of InsertionSort consists of one key assignment and one index decrement; the inner loop of InsertionSort2 consists of one key swap (i.e., three key assignments) and one index decrement. If we disregard the time spent on the index decrements, the ratio of the running times should be estimated as 3Ca/Ca = 3; if we take into account the time spent on the index decrements, the ratio's estimate becomes (3Ca+Cd)/(Ca+Cd), where Ca and Cd are the times of one key assignment and one index decrement, respectively.
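As a concrete illustration, here is a minimal Python sketch of both versions (the names insertion_sort and insertion_sort2 are mine, not the book's); it simply mirrors the pseudocode above and sorts the list in place.

def insertion_sort(a):
    # version 1: shift larger elements right, then drop the saved key into place
    for i in range(1, len(a)):
        v = a[i]
        j = i - 1
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]      # one key assignment per inner-loop iteration
            j -= 1
        a[j + 1] = v

def insertion_sort2(a):
    # version 2: bubble the key left by repeated swaps (three assignments per swap)
    for i in range(1, len(a)):
        j = i - 1
        while j >= 0 and a[j] > a[j + 1]:
            a[j], a[j + 1] = a[j + 1], a[j]
            j -= 1

For example, insertion_sort2 applied to a list such as [89, 45, 68, 90, 29] sorts it in place to [29, 45, 68, 89, 90]; both versions perform the same comparisons, but the swap-based one makes roughly three times as many key assignments, which is exactly the ratio estimated above.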
Algorithm DFS(G)
// Implements a depth-first search traversal of a given graph
// Input: Graph G = <V, E>
// Output: Graph G with its vertices marked with consecutive integers
// in the order they've been first encountered by the DFS traversal
mark each vertex in V with 0 as a mark of being "unvisited"
count <- 0
for each vertex v in V do
    if v is marked with 0
        dfs(v)

dfs(v)
// visits recursively all the unvisited vertices connected to vertex v by a path
// and numbers them in the order they are encountered via global variable count
count <- count+1; mark v with count
for each vertex w in V adjacent to v do
    if w is marked with 0
        dfs(w)
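A Python sketch of the same traversal, assuming the graph is given as an adjacency-list dictionary (that representation is my choice; the pseudocode leaves it open):

def dfs_traversal(graph):
    # graph: dict mapping each vertex to a list of adjacent vertices
    # returns a dict mapping each vertex to its DFS visit number (1, 2, ...)
    mark = {v: 0 for v in graph}          # 0 means "unvisited"
    count = 0

    def dfs(v):
        nonlocal count
        count += 1
        mark[v] = count
        for w in graph[v]:
            if mark[w] == 0:
                dfs(w)

    for v in graph:
        if mark[v] == 0:
            dfs(v)
    return mark

For instance, dfs_traversal({'a': ['b', 'c'], 'b': ['a'], 'c': ['a'], 'd': []}) returns {'a': 1, 'b': 2, 'c': 3, 'd': 4}.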
Algorithm BFS(G)
// Implements a breadth-first search traversal of a given graph
// Input: Graph G = <V, E>
// Output: Graph G with its vertices marked with consecutive integers
// in the order they have been visited by the BFS traversal
mark each vertex in V with 0 as a mark of being "unvisited"
count <- 0
for each vertex v in V do
    if v is marked with 0
        bfs(v)

bfs(v)
// visits all the unvisited vertices connected to vertex v by a path and assigns them
// the numbers in the order they are visited via global variable count
count <- count+1; mark v with count and initialize a queue with v
while the queue is not empty do
    for each vertex w in V adjacent to the front vertex do
        if w is marked with 0
            count <- count+1; mark w with count
            add w to the queue
    remove the front vertex from the queue
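The corresponding Python sketch, again assuming an adjacency-list dictionary (my choice of representation) and a deque as the queue:

from collections import deque

def bfs_traversal(graph):
    # graph: dict mapping each vertex to a list of adjacent vertices
    # returns a dict mapping each vertex to its BFS visit number (1, 2, ...)
    mark = {v: 0 for v in graph}          # 0 means "unvisited"
    count = 0

    def bfs(v):
        nonlocal count
        count += 1
        mark[v] = count
        queue = deque([v])
        while queue:
            front = queue[0]
            for w in graph[front]:
                if mark[w] == 0:
                    count += 1
                    mark[w] = count
                    queue.append(w)
            queue.popleft()               # remove the front vertex

    for v in graph:
        if mark[v] == 0:
            bfs(v)
    return mark

On the same four-vertex graph used in the DFS example, bfs_traversal returns the numbering {'a': 1, 'b': 2, 'c': 3, 'd': 4}.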
Algorithm JohnsonTrotter(n)
// Implements Johnson-Trotter algorithm for generating permutations
// Input: A positive integer n
// Output: A list of all permutations of {1, ..., n}
initialize the first permutation with ←1 ←2 ... ←n
while the last permutation has a mobile element do
    find its largest mobile element k
    swap k and the adjacent integer k's arrow points to
    reverse the direction of all the elements that are larger than k
    add the new permutation to the list
Here is an application of this algorithm for n = 3 (the largest mobile integer is the one swapped at each step):
←1←2←3 ←1←3←2 ←3←1←2 →3←2←1 ←2→3←1 ←2←1→3
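A direct Python translation (my own rendering), representing each element as a pair [value, direction], with direction -1 for a left arrow and +1 for a right arrow:

def johnson_trotter(n):
    # each entry is [value, direction]; direction -1 means the arrow points left
    perm = [[k, -1] for k in range(1, n + 1)]
    result = [[v for v, _ in perm]]
    while True:
        # an element is mobile if its arrow points to an adjacent smaller element
        mobile = -1
        for i, (v, d) in enumerate(perm):
            j = i + d
            if 0 <= j < n and perm[j][0] < v and (mobile == -1 or v > perm[mobile][0]):
                mobile = i
        if mobile == -1:                  # no mobile element left: we are done
            return result
        k = perm[mobile][0]
        j = mobile + perm[mobile][1]
        perm[mobile], perm[j] = perm[j], perm[mobile]   # swap k with its neighbor
        for item in perm:                 # reverse arrows of all elements larger than k
            if item[0] > k:
                item[1] = -item[1]
        result.append([v for v, _ in perm])

johnson_trotter(3) returns [1,2,3], [1,3,2], [3,1,2], [3,2,1], [2,3,1], [2,1,3], matching the six permutations listed above.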
Consider the following implementation of the algorithm for generating permutations discovered by B. Heap.
Algorithm HeapPermute(n)
// Implements Heap's algorithm for generating permutations
// Input: A positive integer n and a global array A[1..n]
// Output: All permutations of elements of A
if n = 1
    write A
else
    for i <- 1 to n do
        HeapPermute(n-1)
        if n is odd
            swap A[1] and A[n]
        else
            swap A[i] and A[n]
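Here is a minimal Python rendering of HeapPermute, with the array passed explicitly and 0-based indexing (both adaptations are mine):

def heap_permute(a, n):
    # generates all permutations of the first n elements of a
    if n == 1:
        print(a)
    else:
        for i in range(1, n + 1):
            heap_permute(a, n - 1)
            if n % 2 == 1:                       # n is odd: swap A[1] and A[n]
                a[0], a[n - 1] = a[n - 1], a[0]
            else:                                # n is even: swap A[i] and A[n]
                a[i - 1], a[n - 1] = a[n - 1], a[i - 1]

# heap_permute([1, 2, 3], 3) prints the six permutations in the order traced below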
Trace the algorithm by hand for n = 2, 3, and 4.
For n = 2:
12 21
For n = 3 (read along the rows):
123 213
312 132
231 321
For n = 4 (read along the rows):
1234 2134 3124 1324 2314 3214
4231 2431 3421 4321 2341 3241
4132 1432 3412 4312 1342 3142
4123 1423 2413 4213 1243 2143
Write a pseudocode for a recursive algorithm for generating all 2^n bit strings of length n.
Algorithm BitstringsRec(n)
// Generates recursively all the bit strings of a given length
// Input: A positive integer n
// Output: All bit strings of length n as contents of global array B[0..n-1]
if n = 0
    print(B)
else
    B[n-1] <- 0; BitstringsRec(n-1)
    B[n-1] <- 1; BitstringsRec(n-1)
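A Python sketch of the recursive generator, with the array B kept as an explicit argument rather than a global (my choice):

def bitstrings_rec(b, n):
    # fills b[0..n-1] with 0s and 1s; when n reaches 0 the whole string is printed
    if n == 0:
        print(''.join(map(str, b)))
    else:
        b[n - 1] = 0
        bitstrings_rec(b, n - 1)
        b[n - 1] = 1
        bitstrings_rec(b, n - 1)

# bitstrings_rec([0] * 3, 3) prints 000, 100, 010, 110, 001, 101, 011, 111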
Write a nonrecursive algorithm for generating all 2^n bit strings of length n that implements bit strings as arrays and does not use binary additions.
Algorithm BitstringsNonrec(n)
// Generates nonrecursively all the bit strings of a given length
// Input: A positive integer n
// Output: All bit strings of length n as contents of global array B[0..n-1]
for i <- 0 to n-1 do
    B[i] <- 0
repeat
    print(B)
    k <- n-1
    while k ≥ 0 and B[k] = 1
        k <- k-1
    if k ≥ 0
        B[k] <- 1
        for i <- k+1 to n-1 do
            B[i] <- 0
until k = -1
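A line-by-line Python rendering of the nonrecursive version (the function name is mine):

def bitstrings_nonrec(n):
    b = [0] * n
    while True:
        print(''.join(map(str, b)))
        k = n - 1
        while k >= 0 and b[k] == 1:    # scan 1s from the right
            k -= 1
        if k < 0:                      # all bits are 1: the last string was printed
            break
        b[k] = 1                       # flip the rightmost 0 ...
        for i in range(k + 1, n):      # ... and reset everything to its right
            b[i] = 0

# bitstrings_nonrec(2) prints 00, 01, 10, 11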
Design a decrease-and-conquer algorithm for generating all combinations of k items chosen from n, i.e., all k-element subsets of a given n-element set.
There are several decrease-and-conquer algorithms for this problem. They are more subtle than one might expect. Generating combinations in a predefined order (increasing, decreasing, lexicographic) helps with both a design and a correctness proof. The following simple property is very helpful. Assuming with no loss of generality that the underlying set is {1, 2, ..., n}, there are C(n-i, k-1) k-subsets whose smallest element is i, i = 1, 2, ..., n-k+1.
Here is a recursive algorithm from "Problems on Algorithms" by Ian Parberry. Call Choose(1, k), where
Algorithm Choose(i, k)
// Generates all k-subsets of {i, i+1, ..., n} stored in global array A[1..k]
// in descending order of their components
if k = 0
    print(A)
else
    for j <- i to n-k+1 do
        A[k] <- j
        Choose(j+1, k-1)
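A Python sketch of Choose, with n passed as a parameter and the subset kept in a local list instead of a global array (my adaptations), using 0-based indexing for the list:

def choose(n, k):
    # prints all k-subsets of {1, ..., n}; each subset is stored in a[0:k]
    # in descending order of its components, mirroring Choose(i, k)
    a = [0] * k

    def rec(i, kk):
        if kk == 0:
            print(a)
        else:
            for j in range(i, n - kk + 2):   # j = i, ..., n-kk+1
                a[kk - 1] = j
                rec(j + 1, kk - 1)

    rec(1, k)

For example, choose(4, 2) prints the six 2-subsets [2, 1], [3, 1], [4, 1], [3, 2], [4, 2], [4, 3].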
Write a pseudocode for the divide-into-three algorithm for the fake-coin problem. (Make sure that your algorithm handles properly all values of n, not only those that are multiples of 3; we assume that the fake coin is lighter.)
If n is a multiple of 3 (i.e., n mod 3 = 0), we can divide the coins into three piles of n/3 coins each and weigh two of the piles. If n = 3k+1 (i.e., n mod 3 = 1), we can divide the coins into piles of sizes k, k and k+1, or k+1, k+1 and k-1. (We will use the second option.) Finally, if n = 3k+2 (i.e., n mod 3 = 2), we will divide the coins into piles of sizes k+1, k+1 and k. The following pseudocode assumes that there is exactly one fake coin among the coins given and that the fake coin is lighter than the other coins.
if n = 1 the coin is fake
else divide the coins into three piles of the sizes described above
    weigh the first two piles
    if they weigh the same, continue with the coins of the third pile
    else continue with the lighter of the first two piles
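A Python simulation of this strategy (my own sketch): the balance is simulated by summing weights, and the two weighed piles get ⌈n/3⌉ coins each, which matches the pile sizes chosen above for every value of n mod 3.

def find_fake(coins):
    # returns the index of the single lighter (fake) coin in `coins`
    candidates = list(range(len(coins)))
    while len(candidates) > 1:
        n = len(candidates)
        w = (n + 2) // 3                      # size of each of the two weighed piles
        pile1, pile2 = candidates[:w], candidates[w:2 * w]
        rest = candidates[2 * w:]
        w1 = sum(coins[i] for i in pile1)     # one simulated weighing
        w2 = sum(coins[i] for i in pile2)
        if w1 == w2:
            candidates = rest                 # fake is in the remaining pile
        elif w1 < w2:
            candidates = pile1                # lighter pile contains the fake
        else:
            candidates = pile2
    return candidates[0]

For coins = [10, 10, 10, 10, 9, 10, 10], find_fake(coins) returns 4 after two simulated weighings.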
This raises a natural question: for large values of n, about how many times faster is this algorithm than the one based on dividing the coins into two piles?
The ratio of the numbers of weighings in the worst case can be approximated for large values of n by
log₂n / log₃n = log₂n / (log₃2 · log₂n) = 1/log₃2 = log₂3 ≈ 1.6.
Write a pseudocode for the multiplication à la russe algorithm.
Algorithm Russe(n, m)
// Implements multiplication à la russe nonrecursively
// Input: Two positive integers n and m
// Output: The product of n and m
p <- 0
while n ≠ 1 do
    if n mod 2 = 1 p <- p+m
    n <- ⌊n/2⌋
    m <- 2*m
return p+m

Algorithm RusseRec(n, m)
// Implements multiplication à la russe recursively
// Input: Two positive integers n and m
// Output: The product of n and m
if n mod 2 = 0 return RusseRec(n/2, 2m)
else if n = 1 return m
else return RusseRec((n-1)/2, 2m) + m
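Both versions translate to Python almost verbatim; this quick sketch uses // for the floor divisions ⌊n/2⌋ and (n-1)/2:

def russe(n, m):
    # multiplication a la russe, nonrecursive version
    p = 0
    while n != 1:
        if n % 2 == 1:
            p += m
        n //= 2
        m *= 2
    return p + m

def russe_rec(n, m):
    # recursive version, mirroring RusseRec
    if n % 2 == 0:
        return russe_rec(n // 2, 2 * m)
    elif n == 1:
        return m
    else:
        return russe_rec((n - 1) // 2, 2 * m) + m

For example, russe(50, 65) and russe_rec(50, 65) both return 3250.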
Write a pseudocode for a nonrecursive implementation of the partition-based algorithm for the selection problem.
Algorithm Selection(A[0..n-1], k)
// Solves the selection problem by a partition-based algorithm
// Input: An array A[0..n-1] of orderable elements and integer k (1 ≤ k ≤ n)
// Output: The value of the k-th smallest element in A[0..n-1]
l <- 0; r <- n-1
A[n] <- ∞ // append sentinel
while l ≤ r do
    p <- A[l] // the pivot
    i <- l; j <- r+1
    repeat
        repeat i <- i+1 until A[i] ≥ p
        repeat j <- j-1 until A[j] ≤ p
        swap(A[i], A[j])
    until i ≥ j
    swap(A[i], A[j]) // undo last swap
    swap(A[l], A[j]) // partition
    if j > k-1 r <- j-1
    else if j < k-1 l <- j+1
    else return A[k-1]
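A Python sketch of the nonrecursive selection (names are mine). Instead of appending an ∞ sentinel, the scan for i is bounded by an explicit index check, and the "undo last swap" step is avoided by breaking out of the loop before the extra swap; otherwise it follows the pseudocode.

def partition(a, l, r):
    # Hoare-style partition of a[l..r] around the pivot a[l];
    # returns the pivot's final position
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i <= r and a[i] < p:   # bound check replaces the infinity sentinel
            i += 1
        j -= 1
        while a[j] > p:
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]
    return j

def selection(a, k):
    # returns the k-th smallest element of a (1 <= k <= len(a)), iteratively
    a = list(a)                      # work on a copy
    l, r = 0, len(a) - 1
    while True:
        j = partition(a, l, r)
        if j > k - 1:
            r = j - 1
        elif j < k - 1:
            l = j + 1
        else:
            return a[k - 1]

For example, selection([4, 1, 10, 8, 7, 12, 9, 2, 15], 5) returns 8, the median of the list.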
Write a pseudocode for a recursive implementation of the algorithm.
Call SelectionRec(A[0..n-1], k), where
Algorithm SelectionRec(A[l..r], k)
// Solves the selection problem by recursive partition-based algorithm
// Input: A subarray A[l..r] of orderable elements and integer k (1 ≤ k ≤ r-l+1)
// Output: The value of the k-th smallest element in A[l..r]
s <- Partition(A[l..r])
if s > l+k-1 return SelectionRec(A[l..s-1], k)
else if s < l+k-1 return SelectionRec(A[s+1..r], k-1-(s-l))
else return A[s]
The following algorithm computes the partition position:
Algorithm Partition(A[l..r])
// Partitions a subarray by using its first element as a pivot
// Input: A subarray A[l..r] of A[0..n-1], defined by its left and right indices l and r (l < r)
// Output: A partition of A[l..r], with the split position returned as this function's value
p <- A[l]
i <- l; j <- r+1
repeat
    repeat i <- i+1 until A[i] ≥ p
    repeat j <- j-1 until A[j] ≤ p
    swap(A[i], A[j])
until i ≥ j
swap(A[i], A[j]) // undo last swap when i ≥ j
swap(A[l], A[j])
return j
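A recursive counterpart (again a sketch of mine), reusing the partition function from the nonrecursive sketch above; here k is the rank within the subarray a[l..r]:

def selection_rec(a, l, r, k):
    # returns the k-th smallest element of a[l..r] (1 <= k <= r - l + 1)
    s = partition(a, l, r)           # partition from the earlier sketch
    if s > l + k - 1:
        return selection_rec(a, l, s - 1, k)
    elif s < l + k - 1:
        return selection_rec(a, s + 1, r, k - 1 - (s - l))
    else:
        return a[s]

# selection_rec(list(a), 0, len(a) - 1, k) returns the same value as selection(a, k)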
new words:
disregard: to pay no attention to    estimate: to judge approximately    adjacent: neighboring
permutation: an ordered arrangement    lexicographic: in dictionary order    fake: counterfeit
multiple: a number divisible by another    product: the result of multiplication (END_XPJIANG.)