Scraping Tweets Directly from Twitter's Search Page – Part 1

Published January 8, 2015

EDIT – Since I wrote this post, Twitter has updated how you get the next list of Tweets for your result. Rather than using scroll_cursor, it now uses max_position. I've written about this in a bit more detail here.

In fairly recent news, Twitter has started indexing its entire history of Tweets going all the way back to 2006. Hurrah for data scientists! However, even with this news (at the time of writing), their search API is still restricted to the past seven days of Tweets. While I doubt this will be the case permanently, as a useful exercise this post presents how we can search for Tweets from Twitter without necessarily using their API. Besides access to the full index, there is also the advantage that Twitter is a little more liberal with rate limits, and you don't require any authentication keys.

The post will be split into two parts: this first part looks at what we can extract from Twitter and how we might go about it, and the second is a tutorial on how we can implement this in Java.

Right, to begin, let's say we want to search Twitter for all Tweets related to the query “Babylon 5”. You can access Twitter's advanced search without being logged in: https://twitter.com/search-advanced

If we take a look at the URL that’s constructed when we perform the search we get:

https://twitter.com/search?q=Babylon%205&src=typd

As we can see, there are two query parameters: q (our query, URL-encoded) and src (assumed to be the source of the query, i.e. typed). However, by default Twitter returns Top results rather than All, so on the displayed page, if you click on All, the URL changes to:

https://twitter.com/search?f=realtime&q=Babylon%205&src=typd

The difference here is the f=realtime parameter, which appears to specify that we receive Tweets in realtime as opposed to a subset of top Tweets. Useful to know, but currently we're only getting the first 25 Tweets back. If we scroll down, though, we notice that more Tweets are loaded on the page via AJAX. Logging all XMLHttpRequests in whatever dev tool you choose to use, we can see that every time we reach the bottom of the page, Twitter makes an AJAX call to a URL similar to:

https://twitter.com/i/search/timeline?f=realtime&q=Babylon%205&src=typd&include_available_features=1&include_entities=1&last_note_ts=85&scroll_cursor=TWEET-553069642609344512-553159310448918528-BD1UO2FFu9QAAAAAAAAETAAAAAcAAAASAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

On further inspection, we see that the response is JSON, which is very useful! Before we look at the response, though, let's have a look at that URL and some of its parameters.

First off, it’s slightly different to the default search URL. The path is /i/search/timeline as opposed to /search. Secondly, while we notice our familiar parameters q, f, and src, from before, there are several additional ones. The most important and essential new one though is scroll_cursor. This is what Twitter uses to paginate the results. If you remove scroll_cursor from that URL, you end up with your first page of results again.

Now let's take a look at the JSON response that Twitter provides:

{
  has_more_items: boolean,
  items_html: "...",
  is_scrolling_request: boolean,
  is_refresh_request: boolean,
  scroll_cursor: "...",
  refresh_cursor: "...",
  focused_refresh_interval: int
}

Not all of these fields are important for this post, but the ones to take note of are: has_more_items, items_html, and scroll_cursor.

has_more_items – A boolean that lets you know whether or not there are any more results after this query.

items_html – This is where all the Tweets live; it is the block of HTML that Twitter appends to the bottom of its timeline. It requires parsing, but there is a good amount of information in there to be extracted, which we will look at in a minute.

scroll_cursor – A pagination value that allows us to extract the next page of results.
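
As a rough sketch of how these might be handled in Java, the three fields we care about could be mapped onto a small class with a JSON library such as Gson (assuming Gson is on the classpath; the class name is my own):

import com.google.gson.Gson;
import com.google.gson.annotations.SerializedName;

// Maps just the fields of the search response that we care about; the
// other fields in the JSON are simply ignored by Gson.
public class SearchResponse {
    @SerializedName("has_more_items") public boolean hasMoreItems;
    @SerializedName("items_html") public String itemsHtml;
    @SerializedName("scroll_cursor") public String scrollCursor;

    public static SearchResponse fromJson(String json) {
        return new Gson().fromJson(json, SearchResponse.class);
    }
}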

Remember our scroll_cursor parameter from earlier on? Well, for each search request you make to Twitter, the value of this key in the response provides you with the next set of Tweets, allowing you to repeatedly call Twitter until either has_more_items is false, or the scroll_cursor you get back equals the one you sent (i.e. it has stopped advancing).
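
Putting the two sketches above together, the pagination logic might look something like this (fetch is a hypothetical helper, not shown here, that performs the HTTP GET and returns the response body as a String):

// Walks through every page of results for a query, stopping when Twitter
// reports no more items or when the cursor stops advancing.
void scrapeAll(String query) throws Exception {
    String cursor = null;
    while (true) {
        String url = SearchUrlBuilder.build(query, cursor);
        SearchResponse response = SearchResponse.fromJson(fetch(url)); // fetch() is hypothetical
        // ... parse response.itemsHtml here (see below) ...
        if (!response.hasMoreItems || response.scrollCursor.equals(cursor)) {
            break;
        }
        cursor = response.scrollCursor;
    }
}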

Now that we know how to access Twitter's own search functionality, let's turn our attention to the Tweets themselves. As mentioned before, items_html in the response is where all the Tweets are. However, it comes as a block of HTML, since Twitter injects that block at the bottom of the page each time the call is made. The HTML inside is a list of li elements, each element a Tweet. I won't post the HTML for one here, as even a single Tweet has a lot of HTML in it, but if you want to look at it, copy the items_html value (omitting the quotes around the HTML content) and paste it into something like JSBeautifier to see the formatted results for yourself.
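
In Java, one way to deal with that block is an HTML parser that copes well with fragments, such as jsoup. A minimal sketch, assuming jsoup is on the classpath:

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;

// Parses the items_html fragment; each top-level <li> is one Tweet.
Elements extractTweets(String itemsHtml) {
    Document doc = Jsoup.parseBodyFragment(itemsHtml);
    return doc.select("body > li");
}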

If we look over the HTML, aside from the Tweet's text, there is actually a lot of useful information encapsulated in this data packet. The most important item is the Tweet ID itself, which you'll find in the root li element. Now, we could stop here, as with that ID you can query Twitter's official API, and if it's a public Tweet, you can get all kinds of information. However, that would defeat the purpose of not using the API, so let's see what we can extract from what we already have.

The list below shows the various CSS selector queries you can use to extract each piece of information (a jsoup sketch follows the list):

Embedded Tweet Data

div.original-tweet[data-tweet-id] – The ID of the Tweet
div.original-tweet[data-screen-name] – The author's Twitter handle
div.original-tweet[data-name] – The name of the author
div.original-tweet[data-user-id] – The user ID of the author
span._timestamp[data-time] – Timestamp of the post
span._timestamp[data-time-ms] – Timestamp of the post in ms
p.tweet-text – Text of the Tweet
span.ProfileTweet-action--retweet > span.ProfileTweet-actionCount[data-tweet-stat-count] – Number of Retweets
span.ProfileTweet-action--favorite > span.ProfileTweet-actionCount[data-tweet-stat-count] – Number of Favourites
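
As a rough illustration, pulling these out of a single Tweet's li element with jsoup might look like the following (li is one of the elements selected in the earlier sketch; the null check matters because real pages can contain promoted or otherwise unusual items):

import org.jsoup.nodes.Element;

// Reads the embedded data out of one Tweet's <li> element using the
// selectors above; attr() returns the value of the named attribute.
void printTweet(Element li) {
    Element tweetDiv = li.select("div.original-tweet").first();
    if (tweetDiv == null) return; // not a regular Tweet
    System.out.println("id:         " + tweetDiv.attr("data-tweet-id"));
    System.out.println("handle:     " + tweetDiv.attr("data-screen-name"));
    System.out.println("author:     " + tweetDiv.attr("data-name"));
    System.out.println("user id:    " + tweetDiv.attr("data-user-id"));
    System.out.println("text:       " + li.select("p.tweet-text").text());
    System.out.println("time (s):   " + li.select("span._timestamp").attr("data-time"));
    System.out.println("retweets:   " + li.select("span.ProfileTweet-action--retweet > span.ProfileTweet-actionCount").attr("data-tweet-stat-count"));
    System.out.println("favourites: " + li.select("span.ProfileTweet-action--favorite > span.ProfileTweet-actionCount").attr("data-tweet-stat-count"));
}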

That’s quite a sizeable amount of information in that HTML. From looking through, we can extract a bunch of stuff about the author, the time stamp of the tweet, the text, and number of retweets and favourites.

So what have we learned here? To summarize, we now know how to construct a Twitter search URL, what response we get back from that query, and what information we can extract from that response. The second part of this tutorial (to follow shortly) will introduce the code for implementing the above.
