Three versions are provided.

A disjoint set is implemented in three ways: a linked-list version with the weighted-union heuristic; a rooted-tree version with union by rank and path compression; and the same rooted-tree version with a minor but substantial optimization of the path-compressing FindSet that removes a redundant call and cuts the running time from 31 ms to 15 ms.

References:

1. Thomas H. Cormen, Introduction to Algorithms.

2. vlad_D (TopCoder member), Disjoint-set Data Structures, https://www.topcoder.com/community/data-science/data-science-tutorials/disjoint-set-data-structures/

In the linked-list version with the weighted-union heuristic, an extra tail member in struct myNode speeds up union. Find is O(1): it is simply p->head, so I removed find() and use p->head directly as the find operation.

(One main point) Whenever a list is absorbed into another and its head stops being a representative, that head's num is folded into the surviving head and reset from 1 to 0. This makes the final count trivial: every node's num is 0 except the nodes that are still heads of linked lists. The rooted-tree versions below play the same trick with rank. (A small standalone illustration follows the first listing.)

// linked list version with weighted-union heuristic, 15 ms

#include <cstdio>
#include <cstring>
#include <algorithm>

#define MAXSIZE 1005

struct myNode {
    int num;        // list size while this node is a head, 0 otherwise
    myNode *head;   // representative of the set (the list head)
    myNode *next;
    myNode *tail;   // valid for heads only: last node of the list, speeds up union
};

// weighted-union heuristic: append the smaller list to the larger one
void MergeSet(myNode *p1, myNode *p2) {
    p1 = p1->head, p2 = p2->head;
    if (p1->num < p2->num) std::swap(p1, p2);
    p1->num += p2->num, p2->num = 0;              // the absorbed head is no longer a head
    p1->tail->next = p2, p1->tail = p2->tail;
    for (; p2; p2 = p2->next) p2->head = p1;      // rewrite head pointers of the shorter list
}

int main() {
#ifndef ONLINE_JUDGE
    freopen("in.txt", "r", stdin);
#endif
    int n, m, u, v, i, cnt;
    myNode cities[MAXSIZE], *p, *pend;
    while (scanf("%d%d", &n, &m) == 2 && n > 0) {
        for (p = &cities[1], pend = p + n; p != pend; ++p) {
            p->num = 1; p->head = p; p->next = 0; p->tail = p;
        }
        for (i = 0; i < m; ++i) {
            scanf("%d%d", &u, &v);
            if (cities[u].head != cities[v].head) MergeSet(&cities[u], &cities[v]);
        }
        // only list heads still have num > 0; the answer is one less than the number of lists
        for (cnt = -1, p = &cities[1], pend = p + n; p != pend; ++p) {
            if (p->num > 0) ++cnt;
        }
        printf("%d\n", cnt);
    }
    return 0;
}
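
To make the head/num bookkeeping concrete, here is a small standalone toy example of my own (not part of the submitted solution): it reuses the struct and MergeSet from the listing above, merges four nodes into two lists, and counts the surviving heads.

#include <cstdio>
#include <algorithm>

struct myNode {
    int num;
    myNode *head;
    myNode *next;
    myNode *tail;
};

void MergeSet(myNode *p1, myNode *p2) {            // same as in the listing above
    p1 = p1->head, p2 = p2->head;
    if (p1->num < p2->num) std::swap(p1, p2);
    p1->num += p2->num, p2->num = 0;
    p1->tail->next = p2, p1->tail = p2->tail;
    for (; p2; p2 = p2->next) p2->head = p1;
}

int main() {
    myNode a[4];
    for (int i = 0; i < 4; ++i) { a[i].num = 1; a[i].head = &a[i]; a[i].next = 0; a[i].tail = &a[i]; }
    MergeSet(&a[0], &a[1]);                        // {0,1} {2} {3}
    MergeSet(&a[2], &a[3]);                        // {0,1} {2,3}
    printf("%d\n", a[0].head == a[1].head);        // 1: "find" is just the head pointer
    int heads = 0;                                 // only surviving heads keep num > 0
    for (int i = 0; i < 4; ++i) if (a[i].num > 0) ++heads;
    printf("%d\n", heads);                         // 2
    return 0;
}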

// rooted tree version, with union by rank and path compression, 31 ms

#include <cstdio>
#include <cstring>
#include <algorithm>

#define MAXSIZE 1005

struct myNode {
    int rank;        // > 0 only while this node is a representative
    myNode *parent;
};

// path compression, two-pass recursion: walk up to the root, then re-parent
// every node on the path to the root on the way back down
myNode* FindSet(myNode *p1) {
    if (p1->parent == p1) return p1;
    return p1->parent = FindSet(p1->parent);
}

// union by rank: attach the tree of smaller rank under the other root;
// compare ranks before clearing p2's rank, so equal-rank unions still grow the rank
void Link(myNode *p1, myNode *p2) {
    if (p1->rank < p2->rank) std::swap(p1, p2);
    if (p1->rank == p2->rank) ++p1->rank;
    p2->rank = 0;                                  // p2 is no longer a representative
    p2->parent = p1;
}

void MergeSet(myNode *p1, myNode *p2) {
    Link(FindSet(p1), FindSet(p2));
}

int main() {
#ifndef ONLINE_JUDGE
    freopen("in.txt", "r", stdin);
#endif
    int n, m, u, v, i, cnt;
    myNode cities[MAXSIZE], *p, *pend;
    while (scanf("%d%d", &n, &m) == 2 && n > 0) {
        for (p = &cities[1], pend = p + n; p != pend; ++p) { p->rank = 1; p->parent = p; }
        for (i = 0; i < m; ++i) {
            scanf("%d%d", &u, &v);
            if (FindSet(&cities[u]) != FindSet(&cities[v]))
                MergeSet(&cities[u], &cities[v]);
        }
        // only representatives still have rank > 0; the answer is one less than their count
        for (cnt = -1, p = &cities[1], pend = p + n; p != pend; ++p) {
            if (p->rank > 0) ++cnt;
        }
        printf("%d\n", cnt);
    }
    return 0;
}

Note that in version 2 the path-compressing FindSet is effectively a two-pass recursive function: the first pass walks up to find the representative, and on the way back down the return value is assigned to each node's parent, which achieves the path compression.
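
For comparison, the same two passes can be written iteratively (a minimal sketch of my own, not part of the submitted code, assuming the same myNode struct as above):

myNode* FindSetIterative(myNode *p1) {
    myNode *root = p1;
    while (root->parent != root) root = root->parent;   // pass 1: walk up to the representative
    while (p1->parent != root) {                         // pass 2: re-parent the path to the root
        myNode *up = p1->parent;
        p1->parent = root;
        p1 = up;
    }
    return root;
}

It returns the same representative and leaves the tree in the same compressed state as the recursive version, without growing the call stack on long chains.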

(Another main point) However, even when p->parent already is the representative, version 2 still blindly calls FindSet(p->parent) and assigns the result back to p->parent, which does nothing useful. We can remove this redundancy with a simple modification: in FindSet, replace

if(p1->parent==p1) return p1;

with

if(p1->parent==p1->parent->parent) return p1->parent;

The new test still covers the old base case: if p1 itself is the root, then p1->parent == p1 == p1->parent->parent, so the function returns immediately; and when p1->parent is the root but p1 is not, the pointless recursive call is now skipped. (A small standalone experiment after the third listing below illustrates the saving.)

// almost the same as version 2, with the optimization just mentioned, 15 ms

#include <cstdio>
#include <cstring>
#include <algorithm>

#define MAXSIZE 1005

struct myNode {
    int rank;        // > 0 only while this node is a representative
    myNode *parent;
};

// path compression; return as soon as p1->parent is already the root
// (this also covers the case where p1 itself is the root)
myNode* FindSet(myNode *p1) {
    if (p1->parent == p1->parent->parent) return p1->parent;
    return p1->parent = FindSet(p1->parent);
}

// union by rank: attach the tree of smaller rank under the other root;
// compare ranks before clearing p2's rank, so equal-rank unions still grow the rank
void Link(myNode *p1, myNode *p2) {
    if (p1->rank < p2->rank) std::swap(p1, p2);
    if (p1->rank == p2->rank) ++p1->rank;
    p2->rank = 0;                                  // p2 is no longer a representative
    p2->parent = p1;
}

void MergeSet(myNode *p1, myNode *p2) {
    Link(FindSet(p1), FindSet(p2));
}

int main() {
#ifndef ONLINE_JUDGE
    freopen("in.txt", "r", stdin);
#endif
    int n, m, u, v, i, cnt;
    myNode cities[MAXSIZE], *p, *pend;
    while (scanf("%d%d", &n, &m) == 2 && n > 0) {
        for (p = &cities[1], pend = p + n; p != pend; ++p) { p->rank = 1; p->parent = p; }
        for (i = 0; i < m; ++i) {
            scanf("%d%d", &u, &v);
            if (FindSet(&cities[u]) != FindSet(&cities[v]))
                MergeSet(&cities[u], &cities[v]);
        }
        // only representatives still have rank > 0; the answer is one less than their count
        for (cnt = -1, p = &cities[1], pend = p + n; p != pend; ++p) {
            if (p->rank > 0) ++cnt;
        }
        printf("%d\n", cnt);
    }
    return 0;
}
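
To see the saving concretely, here is a small standalone experiment of my own (the names FindSetV2, FindSetV3 and the call counter are not in the original code): both FindSet variants are run on a fresh four-node chain and the number of calls each one makes is printed.

#include <cstdio>

struct myNode {
    int rank;
    myNode *parent;
};

static int calls;

myNode* FindSetV2(myNode *p1) {                     // version 2: stops only at the root itself
    ++calls;
    if (p1->parent == p1) return p1;
    return p1->parent = FindSetV2(p1->parent);
}

myNode* FindSetV3(myNode *p1) {                     // version 3: stops once p1->parent is the root
    ++calls;
    if (p1->parent == p1->parent->parent) return p1->parent;
    return p1->parent = FindSetV3(p1->parent);
}

void buildChain(myNode *a, int n) {                 // a[n-1] -> ... -> a[1] -> a[0] (root)
    for (int i = 0; i < n; ++i) { a[i].rank = 1; a[i].parent = i ? &a[i - 1] : &a[i]; }
}

int main() {
    myNode a[4];
    buildChain(a, 4);
    calls = 0; FindSetV2(&a[3]);
    printf("version 2: %d calls\n", calls);         // 4 calls: a[3], a[2], a[1], a[0]
    buildChain(a, 4);
    calls = 0; FindSetV3(&a[3]);
    printf("version 3: %d calls\n", calls);         // 3 calls: stops at a[1], whose parent is the root
    return 0;
}

On trees that are already largely compressed (after the first few queries most parents point directly at the root), version 3 skips the redundant call on almost every FindSet, which is consistent with the 31 ms to 15 ms difference reported above.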

Copyright notice: this is the blogger's original article; please do not reproduce it without permission.

P.S. If any improvement can be achieved, better performance or otherwise, I would appreciate hearing about it; thanks in advance.
