Building a Search Engine (Week 2&3)

Search Engine Architecture

  • Web Crawling

  • Index Building

  • Searching

Web Crawler

A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner. Web crawlers are mainly used to create a copy of all visited pages, which a search engine then indexes to provide fast searches.

Steps

  1. Retrieve a page
  2. Look through the page for links
  3. Add the links to a list of "to be retrieved" sites
  4. Repeat... (a minimal sketch of this loop follows below)
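
Before the course's full spider.py (shown under Code Segments below), here is a minimal sketch of this fetch/parse/enqueue loop. It is an illustration, not the course code: it assumes BeautifulSoup (bs4) is installed, uses https://example.com/ as a placeholder seed, and caps the crawl at ten pages so it terminates.

from urllib.parse import urljoin
from urllib.request import urlopen
from bs4 import BeautifulSoup

to_visit = ['https://example.com/']     # seed URL (a placeholder)
visited = set()

while to_visit and len(visited) < 10:   # small cap so the sketch terminates
    url = to_visit.pop()
    if url in visited:
        continue
    visited.add(url)
    try:
        html = urlopen(url).read()      # 1. retrieve a page
    except Exception:
        continue
    soup = BeautifulSoup(html, 'html.parser')
    for tag in soup('a'):               # 2. look through the page for links
        href = tag.get('href')
        if href is None:
            continue
        href = urljoin(url, href)       # resolve relative references
        if href.startswith('http') and href not in visited:
            to_visit.append(href)       # 3. add to the "to be retrieved" list
    # 4. the while loop repeats

print(len(visited), 'pages fetched')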

Policies

  • A selection policy that states which pages to download
  • A re-visit policy that states when to check for changes to the pages
  • A politeness policy that states how to avoid overloading Web sites (a sketch follows this list)
  • A parallelization policy that states how to coordinate distributed Web crawlers
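
As one illustration of a politeness policy (my sketch, not part of the course code), a crawler can enforce a minimum delay between successive requests to the same host; the two-second delay below is an arbitrary choice:

import time
from urllib.parse import urlparse

DELAY = 2.0        # minimum seconds between hits on one host (assumed value)
last_hit = dict()  # host -> time of the most recent request

def polite_wait(url):
    # Sleep just long enough that requests to this host are at least
    # DELAY seconds apart, then record the request time.
    host = urlparse(url).netloc
    wait = last_hit.get(host, 0.0) + DELAY - time.time()
    if wait > 0:
        time.sleep(wait)
    last_hit[host] = time.time()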

robots.txt

  • A way for a web site to communicate with web crawlers

  • An informal and voluntary standard

  • It tells the crawler where to look and where not to look (see the robots.txt check sketched below)
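
For example, a crawler can consult robots.txt before fetching a page. This sketch uses the standard library's urllib.robotparser and example.com as a placeholder site; it is not taken from the course code:

from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url('https://example.com/robots.txt')
rp.read()                # fetch and parse the robots.txt file

# can_fetch(useragent, url) -> True if that agent may crawl that URL
if rp.can_fetch('*', 'https://example.com/some/page.html'):
    print('Allowed to crawl')
else:
    print('robots.txt asks crawlers to stay away')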

Search Indexing

Search engine indexing collects, parses, and stores data to facilitate fast and accurate information retrieval. The purpose of storing an index is to optimize speed and performance in finding relevant documents for a search query. Without an index, the search engine would scan every document in the corpus, which would require considerable time and computing power.
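
The core data structure here is an inverted index: a map from each word to the set of documents containing it, so a query only touches the posting lists for its terms instead of the whole corpus. A toy sketch (my illustration, not part of the course code):

# Three tiny "documents", keyed by id
docs = {
    1: 'the quick brown fox',
    2: 'the lazy dog',
    3: 'quick brown dogs run',
}

# Build the inverted index: word -> set of document ids
index = dict()
for doc_id, text in docs.items():
    for word in text.lower().split():
        index.setdefault(word, set()).add(doc_id)

# Answer the query "quick brown" by intersecting posting lists
result = index.get('quick', set()) & index.get('brown', set())
print(sorted(result))   # prints [1, 3]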

Code Segments

The four programs below form the course's simple page-rank search engine: spider.py crawls pages into spider.sqlite, sprank.py runs PageRank over the stored link graph, spdump.py prints the ranked pages, and spjson.py exports the graph as JSON for visualization.

spider.py

import sqlite3
import urllib.error
import ssl
from urllib.parse import urljoin
from urllib.parse import urlparse
from urllib.request import urlopen
from bs4 import BeautifulSoup

# Ignore SSL certificate errors
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

# Link to sqlite
conn = sqlite3.connect('spider.sqlite')
cur = conn.cursor()

# Create new tables
cur.execute('''CREATE TABLE IF NOT EXISTS Pages
    (id INTEGER PRIMARY KEY, url TEXT UNIQUE, html TEXT,
     error INTEGER, old_rank REAL, new_rank REAL)''')

cur.execute('''CREATE TABLE IF NOT EXISTS Links
    (from_id INTEGER, to_id INTEGER)''')

# This table stores the top-level web(s) being crawled
cur.execute('''CREATE TABLE IF NOT EXISTS Webs (url TEXT UNIQUE)''')

# Check to see if we are already in progress...
cur.execute('SELECT id,url FROM Pages WHERE html is NULL and error is NULL ORDER BY RANDOM() LIMIT 1')
row = cur.fetchone()
if row is not None:
    print("Restarting existing crawl. Remove spider.sqlite to start a fresh crawl.")
else:
    starturl = input('Enter web url or enter: ')
    if ( len(starturl) < 1 ) : starturl = 'http://www.dr-chuck.com/'
    # Delete the trailing "/"
    if ( starturl.endswith('/') ) : starturl = starturl[:-1]
    web = starturl
    if ( starturl.endswith('.htm') or starturl.endswith('.html') ) :
        pos = starturl.rfind('/')
        web = starturl[:pos]
    if ( len(web) > 1 ) :
        cur.execute('INSERT OR IGNORE INTO Webs (url) VALUES ( ? )', ( web, ) )
        cur.execute('INSERT OR IGNORE INTO Pages (url, html, new_rank) VALUES ( ?, NULL, 1.0 )', ( starturl, ) )
        conn.commit()

# Get the current webs
cur.execute('''SELECT url FROM Webs''')
webs = list()
for row in cur:
    webs.append(str(row[0]))

print(webs)

many = 0
while True:
    if ( many < 1 ) :
        sval = input('How many pages:')
        if ( len(sval) < 1 ) : break
        many = int(sval)
    many = many - 1

    cur.execute('SELECT id,url FROM Pages WHERE html is NULL and error is NULL ORDER BY RANDOM() LIMIT 1')
    try:
        row = cur.fetchone()
        fromid = row[0]
        url = row[1]
    except:
        print('No unretrieved HTML pages found')
        many = 0
        break

    print(fromid, url, end=' ')

    # If we are retrieving this page, there should be no links from it
    cur.execute('DELETE from Links WHERE from_id=?', (fromid, ) )
    try:
        document = urlopen(url, context=ctx)
        html = document.read()
        if document.getcode() != 200 :
            print("Error on page: ", document.getcode())
            cur.execute('UPDATE Pages SET error=? WHERE url=?', (document.getcode(), url) )

        if 'text/html' != document.info().get_content_type() :
            print("Ignore non text/html page")
            cur.execute('DELETE FROM Pages WHERE url=?', ( url, ) )
            conn.commit()
            continue

        print('('+str(len(html))+')', end=' ')

        soup = BeautifulSoup(html, "html.parser")
    except KeyboardInterrupt:
        print('')
        print('Program interrupted by user...')
        break
    except:
        print("Unable to retrieve or parse page")
        cur.execute('UPDATE Pages SET error=-1 WHERE url=?', (url, ) )
        conn.commit()
        continue

    cur.execute('INSERT OR IGNORE INTO Pages (url, html, new_rank) VALUES ( ?, NULL, 1.0 )', ( url, ) )
    cur.execute('UPDATE Pages SET html=? WHERE url=?', (memoryview(html), url ) )
    conn.commit()

    # Retrieve all of the anchor tags
    tags = soup('a')
    count = 0
    for tag in tags:
        href = tag.get('href', None)
        if ( href is None ) : continue
        # Resolve relative references like href="/contact"
        up = urlparse(href)
        if ( len(up.scheme) < 1 ) :
            href = urljoin(url, href)
        ipos = href.find('#')
        if ( ipos > 1 ) : href = href[:ipos]
        if ( href.endswith('.png') or href.endswith('.jpg') or href.endswith('.gif') ) : continue
        if ( href.endswith('/') ) : href = href[:-1]
        if ( len(href) < 1 ) : continue

        # Check if the URL is in any of the webs
        found = False
        for web in webs:
            if ( href.startswith(web) ) :
                found = True
                break
        if not found : continue

        cur.execute('INSERT OR IGNORE INTO Pages (url, html, new_rank) VALUES ( ?, NULL, 1.0 )', ( href, ) )
        count = count + 1
        conn.commit()

        cur.execute('SELECT id FROM Pages WHERE url=? LIMIT 1', ( href, ))
        try:
            row = cur.fetchone()
            toid = row[0]
        except:
            print('Could not retrieve id')
            continue
        cur.execute('INSERT OR IGNORE INTO Links (from_id, to_id) VALUES ( ?, ? )', ( fromid, toid ) )

    print(count)

cur.close()

sprank.py

import sqlite3

conn = sqlite3.connect('spider.sqlite')
cur = conn.cursor()

# Find the ids that send out page rank - we are only interested
# in pages in the SCC that have both in and out links
cur.execute('''SELECT DISTINCT from_id FROM Links''')
from_ids = list()
for row in cur:
    from_ids.append(row[0])

# Find the ids that receive page rank
to_ids = list()
links = list()
cur.execute('''SELECT DISTINCT from_id, to_id FROM Links''')
for row in cur:
    from_id = row[0]
    to_id = row[1]
    if from_id == to_id : continue
    if from_id not in from_ids : continue
    if to_id not in from_ids : continue
    links.append(row)
    if to_id not in to_ids : to_ids.append(to_id)

# Get the latest page ranks for the strongly connected component
prev_ranks = dict()
for node in from_ids:
    cur.execute('''SELECT new_rank FROM Pages WHERE id = ?''', (node, ))
    row = cur.fetchone()
    prev_ranks[node] = row[0]

sval = input('How many iterations:')
many = 1
if ( len(sval) > 0 ) : many = int(sval)

# Sanity check
if len(prev_ranks) < 1 :
    print("Nothing to page rank. Check data.")
    quit()

# Let's do Page Rank in memory so it is really fast
for i in range(many):
    next_ranks = dict()
    total = 0.0
    for (node, old_rank) in list(prev_ranks.items()):
        total = total + old_rank
        next_ranks[node] = 0.0

    # Find the number of outbound links and send the page rank down each
    for (node, old_rank) in list(prev_ranks.items()):
        give_ids = list()
        for (from_id, to_id) in links:
            if from_id != node : continue
            if to_id not in to_ids : continue
            give_ids.append(to_id)
        if ( len(give_ids) < 1 ) : continue
        amount = old_rank / len(give_ids)

        for id in give_ids:
            next_ranks[id] = next_ranks[id] + amount

    newtot = 0
    for (node, next_rank) in list(next_ranks.items()):
        newtot = newtot + next_rank
    evap = (total - newtot) / len(next_ranks)

    for node in next_ranks:
        next_ranks[node] = next_ranks[node] + evap

    newtot = 0
    for (node, next_rank) in list(next_ranks.items()):
        newtot = newtot + next_rank

    # Compute the per-page average change from old rank to new rank
    # as an indication of convergence of the algorithm
    totdiff = 0
    for (node, old_rank) in list(prev_ranks.items()):
        new_rank = next_ranks[node]
        diff = abs(old_rank - new_rank)
        totdiff = totdiff + diff

    avediff = totdiff / len(prev_ranks)
    print(i+1, avediff)

    # rotate
    prev_ranks = next_ranks

# Put the final ranks back into the database
print(list(next_ranks.items())[:5])
cur.execute('''UPDATE Pages SET old_rank=new_rank''')
for (id, new_rank) in list(next_ranks.items()) :
    cur.execute('''UPDATE Pages SET new_rank=? WHERE id=?''', (new_rank, id))
conn.commit()
cur.close()

spdump.py

import sqlite3

conn = sqlite3.connect('spider.sqlite')
cur = conn.cursor()

cur.execute('''SELECT COUNT(from_id) AS inbound, old_rank, new_rank, id, url
    FROM Pages JOIN Links ON Pages.id = Links.to_id
    WHERE html IS NOT NULL
    GROUP BY id ORDER BY inbound DESC''')

count = 0
for row in cur :
    if count < 50 : print(row)
    count = count + 1
print(count, 'rows.')
cur.close()

spjson.py

import sqlite3

conn = sqlite3.connect('spider.sqlite')
cur = conn.cursor()

print("Creating JSON output on spider.js...")
howmany = int(input("How many nodes? "))

cur.execute('''SELECT COUNT(from_id) AS inbound, old_rank, new_rank, id, url
    FROM Pages JOIN Links ON Pages.id = Links.to_id
    WHERE html IS NOT NULL AND error IS NULL
    GROUP BY id ORDER BY id,inbound''')

fhand = open('spider.js', 'w')
nodes = list()
maxrank = None
minrank = None
for row in cur :
    nodes.append(row)
    rank = row[2]
    if maxrank is None or maxrank < rank : maxrank = rank
    if minrank is None or minrank > rank : minrank = rank
    if len(nodes) > howmany : break

if maxrank == minrank or maxrank is None or minrank is None:
    print("Error - please run sprank.py to compute page rank")
    quit()

fhand.write('spiderJson = {"nodes":[\n')
count = 0
map = dict()
ranks = dict()
for row in nodes :
    if count > 0 : fhand.write(',\n')
    # Scale the rank into the range 0..19 for the visualization
    rank = row[2]
    rank = 19 * ( (rank - minrank) / (maxrank - minrank) )
    fhand.write('{'+'"weight":'+str(row[0])+',"rank":'+str(rank)+',')
    fhand.write(' "id":'+str(row[3])+', "url":"'+row[4]+'"}')
    map[row[3]] = count
    ranks[row[3]] = rank
    count = count + 1
fhand.write('],\n')

cur.execute('''SELECT DISTINCT from_id, to_id FROM Links''')
fhand.write('"links":[\n')

count = 0
for row in cur :
    if row[0] not in map or row[1] not in map : continue
    if count > 0 : fhand.write(',\n')
    fhand.write('{"source":'+str(map[row[0]])+',"target":'+str(map[row[1]])+',"value":3}')
    count = count + 1
fhand.write(']};')
fhand.close()
cur.close()
print("Open force.html in a browser to view the visualization")
