Building a Simple News Client App (DCloud + ThinkPHP + Scrapy)
I spent about a month some time ago building a news app. The functionality is simple: news items from a web page are crawled into a backend database on a schedule, and the app displays them.
1. The Client
I used the DCloud framework. I am basically a JavaScript beginner who has never written any serious code, and I am even newer to HTML5, so I simply went with an off-the-shelf frontend framework. I tried AppCan and APICloud before settling on DCloud; its HBuilder editor really is quite good.
Here is part of the key code: it uses DCloud's pull-to-refresh mechanism and fetches the JSON list returned by the backend via Ajax.
<!DOCTYPE html>
<html>

<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width,initial-scale=1,minimum-scale=1,maximum-scale=1,user-scalable=no" />
    <title></title>
    <script src="js/mui.min.js"></script>
    <link href="css/mui.min.css" rel="stylesheet" />
    <script type="text/javascript" charset="utf-8">
        var t; // cache the latest list so the tap handler can pass the full record to the detail page
        mui.init({
            pullRefresh: {
                container: "#pullMine", // pull-to-refresh container; any selector querySelector can resolve (id, .class, etc.)
                down: {
                    contentdown: "下拉可以刷新",   // optional: caption shown while pulling down
                    contentover: "释放立即刷新",   // optional: caption shown when releasing will trigger a refresh
                    contentrefresh: "正在刷新...", // optional: caption shown while refreshing
                    callback: pulldownRefresh      // required: refresh callback, here an ajax call that fetches new data from the server
                }
            }
        });

        mui.plusReady(function() {
            console.log("current page URL: " + plus.webview.currentWebview().getURL());
            mui.ajax('http://202.110.123.123:801/newssystem/index.php/Home/News/getlist_sd', {
                dataType: 'json',
                type: 'get',
                timeout: 10000,
                success: function(data) {
                    t = data;
                    var list = document.getElementById("list");
                    var finallist = '';
                    for (var i = data.length - 1; i >= 0; i--) {
                        finallist = finallist + '<li data-id="' + i + '" class="mui-table-view-cell"><a class="mui-navigate-right"><div class="mui-media-body">' + data[i].title + '<p class="mui-ellipsis">' + data[i].pubtime + '</p></div></a></li>';
                    }
                    list.innerHTML = finallist;
                    console.log("no1" + finallist);
                    // open the detail page on tap and pass the selected record through extras
                    mui('#list').on('tap', 'li', function() {
                        mui.openWindow({
                            url: 'detail_sd.html',
                            id: 'detail_sd',
                            extras: {
                                title: t[this.getAttribute('data-id')].title,
                                author: t[this.getAttribute('data-id')].author,
                                pubtime: t[this.getAttribute('data-id')].pubtime,
                                content: t[this.getAttribute('data-id')].content
                            }
                        })
                    })
                },
                error: function() {}
            })
        })

        /**
         * Pull-to-refresh business logic
         */
        function pulldownRefresh() {
            setTimeout(function() {
                console.log("refreshing....");
                mui.ajax('http://202.110.123.123:801/newssystem/index.php/Home/News/getlist_sd', {
                    dataType: 'json',
                    type: 'get',
                    timeout: 10000,
                    success: function(data) {
                        t = data;
                        var list = document.getElementById("list");
                        var finallist = '';
                        for (var i = data.length - 1; i >= 0; i--) {
                            finallist = finallist + '<li data-id="' + i + '" class="mui-table-view-cell"><a class="mui-navigate-right"><div class="mui-media-body">' + data[i].title + '<p class="mui-ellipsis">' + data[i].pubtime + '</p></div></a></li>';
                        }
                        list.innerHTML = finallist;
                    },
                    error: function() {}
                });
                mui('#pullMine').pullRefresh().endPulldownToRefresh(); // refresh completed
            }, 1500);
        }
    </script>
</head>

<body>
    <div id="pullMine" class="mui-content mui-scroll-wrapper">
        <div class="mui-scroll">
            <ul class="mui-table-view" id="list">
            </ul>
        </div>
    </div>
</body>

</html>
2. Backend PHP API
The backend uses the ThinkPHP framework.
<?php
namespace Home\Controller;
use Think\Controller;
class NewsController extends Controller {
    public function getlist(){
        $newsList=M('news')->order('pubtime asc')->limit(30)->select();
        echo json_encode($newsList);
    }
    public function getlist_sd(){
        $newsList=M('newssd')->order('pubtime asc')->limit(30)->select();
        echo json_encode($newsList);
    }
}
?>
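ThinkPHP's M('news') model maps to the tp_news table (the project is presumably configured with the tp_ table prefix, since that is the table the crawler in section 3 writes into). The post never shows the table schema; below is a sketch inferred from the column names used in the pipeline's INSERT statement. The column types, sizes, and connection parameters are my guesses, not taken from the post.

# create_table.py -- sketch of the assumed tp_news schema (not shown in the original post);
# column names come from the crawler pipeline, types and sizes are guesses
import MySQLdb

ddl = """
CREATE TABLE IF NOT EXISTS tp_news (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    linkmd5id  CHAR(32) UNIQUE,   -- md5 of the article URL, used for de-duplication
    title      VARCHAR(255),
    content    TEXT,
    author     VARCHAR(64),
    link       VARCHAR(255),
    updated    DATETIME,
    pubtime    VARCHAR(32),
    pubtime2   VARCHAR(32)
) DEFAULT CHARSET=utf8;
"""

conn = MySQLdb.connect(host='127.0.0.1', user='root', passwd='your-password',
                       db='newssystem', charset='utf8')  # placeholder credentials
conn.cursor().execute(ddl)
conn.commit()
conn.close()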
3. Backend Crawler
The crawler uses Scrapy to fetch the news content and write it into the database.
pipelines.py
# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html

from scrapy import signals
from scrapy import log
import json
import codecs
from twisted.enterprise import adbapi
from datetime import datetime
from hashlib import md5
import MySQLdb
import MySQLdb.cursors


class JsonWithEncodingtutorialPipeline(object):
    # writes every item to qdnews.json, one JSON object per line
    def __init__(self):
        self.file = codecs.open('qdnews.json', 'w', encoding='utf-8')
    def process_item(self, item, spider):
        line = json.dumps(dict(item), ensure_ascii=False) + "\n"
        self.file.write(line)
        return item
    def spider_closed(self, spider):
        self.file.close()


class MySQLStoretutorialPipeline(object):
    def __init__(self, dbpool):
        self.dbpool = dbpool
        print("-----------init sql proc---")

    @classmethod
    def from_settings(cls, settings):
        dbargs = dict(
            host=settings['MYSQL_HOST'],
            db=settings['MYSQL_DBNAME'],
            user=settings['MYSQL_USER'],
            passwd=settings['MYSQL_PASSWD'],
            charset='utf8',
            cursorclass=MySQLdb.cursors.DictCursor,
            use_unicode=True,
        )
        dbpool = adbapi.ConnectionPool('MySQLdb', **dbargs)
        return cls(dbpool)

    # called by Scrapy for every item
    def process_item(self, item, spider):
        d = self.dbpool.runInteraction(self._do_upinsert, item, spider)
        d.addErrback(self._handle_error, item, spider)
        d.addBoth(lambda _: item)
        return d

    # update or insert each record in the database
    def _do_upinsert(self, conn, item, spider):
        print(item['link'][0])
        linkmd5id = self._get_linkmd5id(item)
        print(linkmd5id)
        print("--------------")
        now = datetime.now().replace(microsecond=0).isoformat(' ')
        conn.execute("""
            select 1 from tp_news where linkmd5id = %s
        """, (linkmd5id, ))
        ret = conn.fetchone()
        print('ret=', ret)
        if ret:
            conn.execute("""
                update tp_news set title = %s, content = %s, author = %s, pubtime = %s, pubtime2 = %s, link = %s, updated = %s where linkmd5id = %s
            """, (item['title'][0][4:-5], item['content'][0], item['pubtime'][0][16:-4], item['pubtime'][0][-14:-4], item['pubtime'][0][-14:-4], item['link'][0], now, linkmd5id))
        else:
            conn.execute("""
                insert into tp_news(linkmd5id, title, content, author, link, updated, pubtime, pubtime2)
                values(%s, %s, %s, %s, %s, %s, %s, %s)
            """, (linkmd5id, item['title'][0][4:-5], item['content'][0], item['pubtime'][0][16:-4], item['link'][0], now, item['pubtime'][0][-14:-4], item['pubtime'][0][-14:-4]))

    # md5 of the article URL, used to avoid collecting the same link twice
    def _get_linkmd5id(self, item):
        return md5(item['link'][0]).hexdigest()

    # error handling
    def _handle_error(self, failure, item, spider):
        log.err(failure)
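The MySQL pipeline reads its connection parameters from the Scrapy project settings, and the boilerplate comment above is a reminder that the pipeline also has to be registered in ITEM_PIPELINES. The post does not include settings.py; a minimal sketch of what it might contain is below. The tutorial package name comes from the import paths in spiders.py, while the host, database name, and credentials are placeholders.

# settings.py -- sketch, not from the original post; values are placeholders
BOT_NAME = 'tutorial'
SPIDER_MODULES = ['tutorial.spiders']
NEWSPIDER_MODULE = 'tutorial.spiders'

# register both pipelines; lower numbers run earlier
ITEM_PIPELINES = {
    'tutorial.pipelines.JsonWithEncodingtutorialPipeline': 300,
    'tutorial.pipelines.MySQLStoretutorialPipeline': 800,
}

# MySQL connection info read by MySQLStoretutorialPipeline.from_settings()
MYSQL_HOST = '127.0.0.1'
MYSQL_DBNAME = 'newssystem'
MYSQL_USER = 'root'
MYSQL_PASSWD = 'your-password'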
items.py
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class DmozItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    pubtime = scrapy.Field()
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()
    content = scrapy.Field()
    id = scrapy.Field()
    # these two fields are assigned by the experimental spiders in spiders.py
    date = scrapy.Field()
    detail = scrapy.Field()
spiders.py
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from tutorial.items import DmozItem
from scrapy.http import Request
from scrapy.utils.response import get_base_url
from scrapy.utils.url import urljoin_rfc
from urllib2 import urlopen
from BeautifulSoup import BeautifulSoup

from scrapy.spiders import CrawlSpider
from scrapy.loader import ItemLoader
from scrapy.linkextractors.sgml import SgmlLinkExtractor


import scrapy

class DmozSpider(BaseSpider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]
    def parse(self, response):
        # filename = response.url.split("/")[-2]
        # open(filename, 'wb').write(response.body)
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//ul/li')
        items = []
        for site in sites:
            item = DmozItem()
            item['title'] = site.select('a/text()').extract()
            item['link'] = site.select('a/@href').extract()
            item['desc'] = site.select('text()').extract()
            items.append(item)
        return items

class DmozSpider2(BaseSpider):
    name = "dmoz2"
    allowed_domains = ["10.60.32.179"]
    start_urls = [
        "http://10.60.32.179/Site/Site1/myindex.shtml",
        #"http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]
    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//*[@id="_ctl0_LblContent"]/div/div//ul/li')
        items = []
        for site in sites:
            item = DmozItem()
            item['date'] = site.select('span/text()').extract()
            item['title'] = site.select('a/text()').extract()
            item['link'] = site.select('a/@href').extract()
            item['desc'] = site.select('text()').extract()
            items.append(item)
        return items


class MySpider(BaseSpider):
    name = "myspider"
    allowed_domains = ["10.60.32.179"]
    start_urls = [
        'http://10.60.32.179/Site/Site1/myindex.shtml',
        #'http://example.com/page2',
    ]
    def parse(self, response):
        # collect `item_urls`
        hxs = HtmlXPathSelector(response)
        item_urls = hxs.select('//*[@id="_ctl0_LblContent"]/div/div//ul/li')
        base_url = get_base_url(response)
        items = []
        for item_url in item_urls:
            yield Request(url=response.url, callback=self.parse_item, meta={'items': items})

    def parse_item(self, response):
        hxs = HtmlXPathSelector(response)
        item_urls = hxs.select('//*[@id="_ctl0_LblContent"]/div/div//ul/li')

        item = DmozItem()
        items = response.meta['items']
        item['date'] = item_urls.select('span/text()').extract()
        item['title'] = item_urls.select('a/text()').extract()
        item['link'] = item_urls.select('a/@href').extract()
        item['desc'] = item_urls.select('text()').extract()

        # item_details_url=item['link']
        # populate `item` fields
        relative_url = item_urls.select('a/@href').extract()
        print(relative_url[0])
        base_url = get_base_url(response)
        item_details_url = urljoin_rfc(base_url, relative_url[0])
        yield Request(url=item_details_url, callback=self.parse_details, dont_filter=True, meta={'item': item, 'items': items})

    def parse_details(self, response):
        #item = response.meta['item']
        # populate more `item` fields
        print("***********************In parse_details()***************")
        hxs = HtmlXPathSelector(response)
        print("-------------------------------")
        print(response.url)
        item_detail = hxs.select('/html/body/center/div/div[4]/div[1]/p[1]').extract()
        print("________________", item_detail)
        item = response.meta['item']
        item['detail'] = item_detail
        items = response.meta['items']
        items.append(item)
        return items
class DmozSpider3(BaseSpider):
    name = "dmoz3"
    allowed_domains = ["10.60.32.179"]
    start_urls = [
        'http://10.60.32.179/Site/Site1/myindex.shtml',
    ]
    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        sites = hxs.select('//*[@id="_ctl0_LblContent"]/div/div//ul/li')
        items = []
        for site in sites:
            item = DmozItem()
            item['date'] = site.select('span/text()').extract()
            item['title'] = site.select('a/text()').extract()
            item['link'] = site.select('a/@href').extract()
            item['desc'] = site.select('text()').extract()
            print(item['link'][0])
            base_url = get_base_url(response)
            relative_url = item['link'][0]
            item_details_url = urljoin_rfc(base_url, relative_url)
            print("*********************", item_details_url)
            #response2=BeautifulSoup(urlopen(item_details_url).read())
            # note: constructing a Response like this does not download the page,
            # so the selector below extracts nothing; this approach was abandoned
            response2 = scrapy.http.Response(item_details_url)
            hxs2 = HtmlXPathSelector(response2)
            item['detail'] = hxs2.select('/html/body/center/div/div[4]/div[1]/p[1]').extract()
            items.append(item)
        return items

class MySpider5(BaseSpider):
    name = "myspider5"
    allowed_domains = ["10.60.32.179"]
    start_urls = [
        'http://10.60.32.179/Site/Site1/myindex.shtml',
        #'http://example.com/page2',
    ]
    items = []
    item = DmozItem()
    def parse(self, response):
        # collect `item_urls`
        hxs = HtmlXPathSelector(response)
        item_urls = hxs.select('//*[@id="_ctl0_LblContent"]/div/div//ul/li')
        base_url = get_base_url(response)
        for item_url in item_urls:
            MySpider5.item['date'] = item_url.select('span/text()').extract()
            MySpider5.item['title'] = item_url.select('a/text()').extract()
            MySpider5.item['link'] = item_url.select('a/@href').extract()
            MySpider5.item['desc'] = item_url.select('text()').extract()
            relative_url = MySpider5.item['link']
            print(relative_url[0])
            base_url = get_base_url(response)
            item_details_url = urljoin_rfc(base_url, relative_url[0])
            print('xxxxxxxxxxxxxxxxxxxxxxxxx=' + str(item_details_url))
            yield Request(url=item_details_url, callback=self.parse_details)
    # def parse_item(self, response):
    #     hxs=HtmlXPathSelector(response)
    #     item_urls=hxs.select('//*[@id="_ctl0_LblContent"]/div/div//ul/li')
    #     # item_details_url=item['link']
    #     # populate `item` fields
    #     relative_url=item_urls.select('a/@href').extract()
    #     print(relative_url[0])
    #     base_url = get_base_url(response)
    #     item_details_url=urljoin_rfc(base_url, relative_url[0])
    #     yield Request(url=item_details_url,callback=self.parse_details,dont_filter=True,meta={'item':item,'items':items})
    def parse_details(self, response):
        #item = response.meta['item']
        # populate more `item` fields
        print("***********************In parse_details()***************")
        hxs = HtmlXPathSelector(response)
        print("----------------------------------------------------------------")
        print(response.url)
        item_detail = hxs.select('/html/body/center/div/div[4]/div[1]/p[1]').extract()
        print("________________", item_detail)
        #item=response.meta['item']
        #item['detail']=item_detail
        #items.append(item)
        MySpider5.item['detail'] = item_detail
        MySpider5.items.append(MySpider5.item)
        return MySpider5.item
    def parse_details2(self, response):
        #item = response.meta['item']
        # populate more `item` fields
        bbsItem_loader = ItemLoader(item=DmozItem(), response=response)
        url = str(response.url)
        bbsItem_loader.add_value('title', item['title'])  # note: `item` is not defined here; this method is unused/experimental
        abc = {
            'detail': '/html/body/center/div/div[4]/div[1]/p[1]'}
        bbsItem_loader.add_xpath('detail', abc['detail'])
        return bbsItem_loader.load_item()
class MySpider6(CrawlSpider):
    name = "myspider6"
    allowed_domains = ["10.60.32.179"]
    start_urls = [
        'http://10.60.32.179/Site/Site1/myindex.shtml',
        #'http://example.com/page2',
    ]
    link_extractor = {
        # 'page':SgmlLinkExtractor(allow='/bbsdoc,board,\w+\.html$'),
        # 'page_down':SgmlLinkExtractor(allow='/bbsdoc,board,\w+,page,\d+\.html$'),
        'page': SgmlLinkExtractor(allow='/Article/\w+\/\w+\.shtml$'),
    }
    _x_query = {
        'date': 'span/text()',
        'date2': '/html/body/center/div/div[4]/p',
        'title': 'a/text()',
        'title2': '/html/body/center/div/div[4]/h2'
    }
    _y_query = {
        'detail': '/html/body/center/div/div[4]/div[1]/p[1]',
    }
    def parse(self, response):
        self.t = 0
        for link in self.link_extractor['page'].extract_links(response):
            yield Request(url=link.url, callback=self.parse_content)
            self.t = self.t + 1
    def parse_content(self, response):
        bbsItem_loader = ItemLoader(item=DmozItem(), response=response)
        url = str(response.url)
        bbsItem_loader.add_value('desc', url)
        bbsItem_loader.add_value('link', url)
        bbsItem_loader.add_xpath('title', self._x_query['title2'])
        bbsItem_loader.add_xpath('pubtime', self._x_query['date2'])
        bbsItem_loader.add_xpath('content', self._y_query['detail'])
        bbsItem_loader.add_value('id', self.t)  # why not useful?
        return bbsItem_loader.load_item()

class MySpider6SD(CrawlSpider):
    name = "myspider6sd"
    allowed_domains = ["10.60.7.45"]
    start_urls = [
        'http://10.60.7.45/SITE_sdyc_WEB/Site1219/index.shtml',
        #'http://example.com/page2',
    ]
    link_extractor = {
        # 'page':SgmlLinkExtractor(allow='/bbsdoc,board,\w+\.html$'),
        # 'page_down':SgmlLinkExtractor(allow='/bbsdoc,board,\w+,page,\d+\.html$'),
        'page': SgmlLinkExtractor(allow='/Article/\w+\/\w+\.shtml$'),
        #http://10.60.32.179/Site/Col411/Article/201510/35770_2015_10_29_8058797.shtml
        #http://10.60.7.45/SITE_sdyc_WEB/Col1527/Article/201510/sdnw_2110280_2015_10_29_91353216.shtml
    }
    _x_query = {
        'date': 'span/text()',
        'date2': '/html/body/center/div/div[4]/p',
        'title': 'a/text()',
        #'title2':'/html/body/center/div/div[4]/h2'
        'title2': '/html/body/div[4]/div[1]/div[2]/div[1]/h1[2]/font'
        #'author':'/html/body/div[4]/div[1]/div[2]/div[1]/div/span[1]'
        #'pubtime2':'/html/body/div[4]/div[1]/div[2]/div[1]/div/span[2]'
    }
    _y_query = {
        #'detail':'/html/body/center/div/div[4]/div[1]/p[1]',
        'detail': '//*[@id="Zoom"]'
    }
    def parse(self, response):
        self.t = 0
        for link in self.link_extractor['page'].extract_links(response):
            yield Request(url=link.url, callback=self.parse_content)
            self.t = self.t + 1
    def parse_content(self, response):
        bbsItem_loader = ItemLoader(item=DmozItem(), response=response)
        url = str(response.url)
        bbsItem_loader.add_value('desc', url)
        bbsItem_loader.add_value('link', url)
        bbsItem_loader.add_xpath('title', self._x_query['title2'])
        bbsItem_loader.add_xpath('pubtime', self._x_query['date2'])
        bbsItem_loader.add_xpath('content', self._y_query['detail'])
        bbsItem_loader.add_value('id', self.t)  # why not useful?
        return bbsItem_loader.load_item()
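The intro says the news is crawled into the database on a schedule, but the post does not show how the spiders are launched. One way (a sketch, not from the original post, assuming Scrapy 1.x) is a small driver script built on CrawlerProcess, which cron or the Windows task scheduler can then run periodically. The spider name 'myspider6sd' matches the class above; the file name run_crawl.py is my own.

# run_crawl.py -- sketch of a periodic crawl entry point (not part of the original post)
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

def main():
    # load settings.py so ITEM_PIPELINES and the MySQL settings take effect
    process = CrawlerProcess(get_project_settings())
    # 'myspider6sd' is the name attribute of MySpider6SD defined above
    process.crawl('myspider6sd')
    process.start()  # blocks until the crawl finishes

if __name__ == '__main__':
    main()

With that in place, a crontab entry such as `0 * * * * python /path/to/run_crawl.py` would refresh tp_news every hour.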