I'll use this site as the example: https://91mjw.com/ (the approach is much the same for other sites).

Step 1: Get the links to all the videos on the site

html = requests.get("https://91mjw.com",headers=gHeads).text
xmlcontent = etree.HTML(html)
UrlList = xmlcontent.xpath("//div[@class='m-movies clearfix']/article/a/@href")
NameList = xmlcontent.xpath("//div[@class='m-movies clearfix']/article/h2/a/text()")
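
Here gHeads is the same User-Agent header dict defined in the full script below. The two XPath queries return parallel lists, one URL and one title per series, so they pair off directly; a quick check:

for url, name in zip(UrlList, NameList):
    print(url, name)    # one (series URL, series title) pair per line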
Step 2: Open the chosen show's page and get the links to its videos

UrlList = xmlContent.xpath("//div[@id='video_list_li']/a/@href")
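
The hrefs this returns are relative episode paths; the full script simply concatenates each one onto the series URL (firstUrl + secUrl) before requesting the episode page. A minimal sketch, where seriesUrl stands in for the series page picked in step 1:

episodeUrl = seriesUrl + UrlList[0]     # seriesUrl is a placeholder for the URL chosen in step 1
html = requests.get(episodeUrl, headers=gHeads).text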

Step 3: Build the parameters needed to download a video
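
This takes two requests, mirroring what the full script below does. The episode page's first <script> tag carries the player's type and vkey as its first two quoted strings; fetching https://api.1suplayer.me/player/?userID=&type=...&vkey=... then returns another script from which userIP, my_url, ckey and the final type/vkey are pulled with a regex and assembled into the POST payload for api.php. A minimal sketch of the first extraction, with html being the episode page from step 2:

import re
from bs4 import BeautifulSoup

# the first <script> in <body> embeds the player parameters as quoted strings
script = BeautifulSoup(html, "html.parser").find("body").find("script").text
vType, vKey = re.findall('"(.*?)"', script)[:2]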

Step 4: Download the video and save it locally
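
The URL api.php finally hands back points at the file host itself. The script fetches it with a Range header and writes it to disk in 10 KB chunks, so a whole episode never has to sit in memory. The core of that as a standalone sketch, with videoUrl being the URL obtained in step 3 and episode.mp4 a placeholder filename:

import requests
from contextlib import closing

with closing(requests.get(videoUrl, headers={"Range": "bytes=0-"}, stream=True)) as resp:
    if resp.status_code == 206:                       # 206 means the server honoured the Range request
        with open("episode.mp4", "wb") as f:
            for chunk in resp.iter_content(chunk_size=10240):
                f.write(chunk)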

Now for the implementation. It uses multiple threads plus a semaphore: five threads run by default, each one downloading a complete series (a whole series per thread, start to finish). You can change how many threads run at once by editing nMaxThread.

The code:

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import os
import re
import requests
from threading import Thread, BoundedSemaphore
from bs4 import BeautifulSoup
from lxml import etree
from contextlib import closing

nMaxThread = 5                                  # how many series download at once
connectlock = BoundedSemaphore(nMaxThread)
gHeads = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36"}

class MovieThread(Thread):
    """One thread per series: walks its episode list and downloads each file."""
    def __init__(self, url, movieName):
        Thread.__init__(self)
        self.url = url
        self.movieName = movieName

    def run(self):
        try:
            urlList = self.GetMovieUrl(self.url)
            if not urlList:                     # series page gave no episode links
                return
            for i in range(len(urlList)):
                vType, vKey = self.GetVkeyParam(self.url, urlList[i])
                if vType is not None and vKey is not None:
                    payload, downloadUrl = self.GetOtherParam(self.url, urlList[i], vType, vKey)
                    if downloadUrl:
                        videoUrl = self.GetDownloadUrl(payload, downloadUrl)
                        if videoUrl:
                            self.DownloadVideo(videoUrl, self.movieName, i + 1)
        finally:
            connectlock.release()               # free the slot even if anything above failed

    def GetMovieUrl(self, url):
        """Step 2: fetch the series page and return its relative episode links."""
        heads = {
            "User-Agent": gHeads["User-Agent"],
            "Host": "91mjw.com",
            "Referer": "https://91mjw.com/"
        }
        html = requests.get(url, headers=heads).text
        xmlContent = etree.HTML(html)
        urlList = xmlContent.xpath("//div[@id='video_list_li']/a/@href")
        return urlList if urlList else None

    def GetVkeyParam(self, firstUrl, secUrl):
        """Step 3a: the episode page's first <script> holds type and vkey as its first two quoted strings."""
        heads = {
            "User-Agent": gHeads["User-Agent"],
            "Host": "91mjw.com",
            "Referer": firstUrl
        }
        try:
            html = requests.get(firstUrl + secUrl, headers=heads).text
            bs = BeautifulSoup(html, "html.parser")
            content = bs.find("body").find("script")
            reContent = re.findall('"(.*?)"', content.text)
            return reContent[0], reContent[1]
        except Exception:
            return None, None

    def GetOtherParam(self, firstUrl, secUrl, vType, vKey):
        """Step 3b: the player page's script carries userIP, my_url, ckey and the final type/vkey."""
        url = "https://api.1suplayer.me/player/?userID=&type=%s&vkey=%s" % (vType, vKey)
        heads = {
            "User-Agent": gHeads["User-Agent"],
            "Host": "api.1suplayer.me",
            "Referer": firstUrl + secUrl
        }
        try:
            html = requests.get(url, headers=heads).text
            bs = BeautifulSoup(html, "html.parser")
            content = bs.find("body").find("script").text
            reContent = re.findall(" = '(.+?)'", content)
            payload = {
                "type": reContent[3],
                "vkey": reContent[4],
                "ckey": reContent[2],
                "userID": "",
                "userIP": reContent[0],
                "refres": 1,
                "my_url": reContent[1]
            }
            return payload, url
        except Exception:
            return None, None

    def GetDownloadUrl(self, payload, refererUrl):
        """Step 3c: POST the payload to api.php until it answers 200 with the real file URL."""
        heads = {
            "User-Agent": gHeads["User-Agent"],
            "Host": "api.1suplayer.me",
            "Referer": refererUrl,
            "Origin": "https://api.1suplayer.me",
            "X-Requested-With": "XMLHttpRequest"
        }
        while payload["refres"] <= 10:          # retry cap so a stream of 404s can't loop forever
            retData = requests.post("https://api.1suplayer.me/player/api.php", data=payload, headers=heads).json()
            if retData["code"] == 200:
                return retData["url"]
            elif retData["code"] == 404:
                payload["refres"] += 1          # bump the refresh counter and ask again
            else:
                return None
        return None

    def DownloadVideo(self, url, videoName, videoNum):
        """Step 4: stream the file to ./video/<series>/<nn>.mp4 in 10 KB chunks."""
        currentSize = 0
        heads = {
            "chrome-proxy": "frfr",
            "User-Agent": gHeads["User-Agent"],
            "Host": "sh-yun-ftn.weiyun.com",    # pinned to the file host the site serves from
            "Range": "bytes=0-"
        }
        # stream=True keeps the body out of memory; closing() releases the connection on exit
        with closing(requests.get(url, headers=heads, stream=True)) as response:
            retSize = int(response.headers["Content-Length"])
            chunkSize = 10240
            if response.status_code == 206:     # 206 Partial Content: the Range header was honoured
                print("[File Size]: %0.2f MB\n" % (retSize / 1024 / 1024))
                os.makedirs("./video/%s" % videoName, exist_ok=True)
                with open("./video/%s/%02d.mp4" % (videoName, videoNum), "wb") as f:
                    for data in response.iter_content(chunk_size=chunkSize):
                        f.write(data)
                        currentSize += len(data)
                        print("[Progress]: %0.2f%%" % (currentSize * 100.0 / retSize), end="\r")

def main():
    html = requests.get("https://91mjw.com", headers=gHeads).text
    xmlContent = etree.HTML(html)
    UrlList = xmlContent.xpath("//div[@class='m-movies clearfix']/article/a/@href")
    NameList = xmlContent.xpath("//div[@class='m-movies clearfix']/article/h2/a/text()")
    for i in range(len(UrlList)):
        connectlock.acquire()                   # blocks once nMaxThread threads are already running
        t = MovieThread(UrlList[i], NameList[i])
        t.start()

if __name__ == '__main__':
    main()
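
To change the concurrency, edit nMaxThread: main() acquires the semaphore before starting each thread and run() releases it in its finally block, so at most nMaxThread series are downloading at any moment.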
