SELECT DISTINCT
o.name AS Object_Name,
o.type_desc
FROM sys.sql_modules m
INNER JOIN
sys.objects o
ON m.object_id = o.object_id
WHERE m.definition LIKE '%\[phonenumber\]%' ESCAPE '\';
Load data from MySQL into a DataFrame, then add two columns flagging whether each row is a buy point or a sell point
import pandas as pd
import pymysql
# import matplotlib.pyplot as plt  # only needed for the optional plot below

connection = pymysql.connect(host='localhost', user='root',
                             password='MYSQLTB', db='shfuture')
try:
    query = "SELECT happentime, lastprice FROM if1901_20190102"
    df = pd.read_sql(query, connection)
    IsSellPoint = []
    IsBuyPoint = []
    for index, row in df.iterrows():
        currentprice = row['lastprice']
        # all prices that occur after the current row
        my_list1 = df.iloc[index + 1:len(df.index), 1].to_numpy()
        # sell point: some later price falls more than 20 below the current price
        IsSellPoint.append(any(i < currentprice - 20 for i in my_list1))
        # buy point: some later price rises more than 20 above the current price
        IsBuyPoint.append(any(i > currentprice + 20 for i in my_list1))
    df["IsSellPoint"] = IsSellPoint
    df["IsBuyPoint"] = IsBuyPoint
    print(df.head(10))
    df.to_csv('20230516.csv', sep='\t', encoding='utf-8')
    # optional plot:
    # plt.plot(df['happentime'], df['lastprice']); plt.show()
finally:
    connection.close()
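The `iterrows` scan above is O(n²) because every row rescans all later prices. The same flags can be computed in one pass with suffix minima/maxima. A minimal NumPy-only sketch (`flag_points` is a hypothetical helper name; the 20-point threshold matches the loop above):

```python
import numpy as np

def flag_points(prices, threshold=20):
    """Flag sell points (some later price drops > threshold below the current
    price) and buy points (some later price rises > threshold above it)."""
    prices = np.asarray(prices, dtype=float)
    n = len(prices)
    future_min = np.full(n, np.inf)    # min of all later prices
    future_max = np.full(n, -np.inf)   # max of all later prices
    if n > 1:
        # suffix min/max via accumulate on the reversed array, shifted by one
        future_min[:-1] = np.minimum.accumulate(prices[::-1])[::-1][1:]
        future_max[:-1] = np.maximum.accumulate(prices[::-1])[::-1][1:]
    is_sell = future_min < prices - threshold
    is_buy = future_max > prices + threshold
    return is_sell, is_buy

is_sell, is_buy = flag_points([100., 70., 130., 100.])
print(is_sell.tolist())  # [True, False, True, False]
print(is_buy.tolist())   # [True, True, False, False]
```

The boolean arrays can be assigned directly to `df["IsSellPoint"]` and `df["IsBuyPoint"]`.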
How to run Gradle commands with Android Studio on a Mac
cd into the project directory that contains the Gradle wrapper, then run ./gradlew
scikit-learn study notes
Version 0.24.2
Start with this e-book, an introduction written by a Harvard post-'95 grad: https://dafriedman97.github.io/mlbook/content/c1/concept.html
The Chinese-language version here is also decent: https://lulaoshi.info/machine-learning/linear-model/minimise-loss-function.html
Term glossary: linear regression = 线性回归
loss function = 损失函数
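To make the glossary concrete: fitting a linear regression means choosing coefficients that minimise the squared-error loss function. A NumPy-only sketch of the closed-form (normal equations) solution, on made-up data from y = 2x + 1:

```python
import numpy as np

# data generated from y = 2x + 1, so the loss can be driven to ~0
x = np.array([0., 1., 2., 3., 4.])
y = 2 * x + 1

X = np.column_stack([np.ones_like(x), x])  # design matrix: intercept + slope
# normal equations: beta minimises the squared-error loss ||X beta - y||^2
beta = np.linalg.solve(X.T @ X, X.T @ y)
loss = np.mean((X @ beta - y) ** 2)
print(beta)  # ~ [1. 2.]  (intercept, slope)
print(loss)  # ~ 0.0
```

scikit-learn's `LinearRegression` computes the same least-squares fit behind the scenes.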
How to write a MySQL 5.1 stored procedure in HeidiSQL
Example:
DELIMITER $$
CREATE PROCEDURE loopTables111 ()
BEGIN
    DECLARE done INT DEFAULT 0;
    DECLARE TableName VARCHAR(64); -- table names can exceed 17 characters
    DECLARE TablesCursor CURSOR FOR
        SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES
        WHERE SUBSTRING_INDEX(TABLE_NAME, '_', -1) = '20210826';
    DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;
    OPEN TablesCursor;
    MainLoop: LOOP
        FETCH TablesCursor INTO TableName;
        IF done = 1 THEN
            LEAVE MainLoop; -- without this check the loop never terminates
        END IF;
        SELECT TableName;
    END LOOP;
    CLOSE TablesCursor;
END$$
DELIMITER ;
How to build a scheduled Python crawler with a login step
A web crawler is also known as a web spider.
Python offers several libraries for this:
- BeautifulSoup: a library for parsing HTML and XML documents. Requests (which handles HTTP sessions and makes HTTP requests) combined with BeautifulSoup (the parsing library) is the best toolset for small, quick scraping jobs. For simpler, static pages with little JavaScript complexity, this is probably what you're looking for. For more, see my previous guide, Extracting Data from HTML with BeautifulSoup.
- lxml: a high-performance, straightforward, fast, feature-rich parsing library and a prominent alternative to BeautifulSoup.
- Scrapy: a web-crawling framework that provides a complete scraping toolkit. In Scrapy you create Spiders, Python classes that define how a particular site (or sites) will be scraped. If you want a robust, concurrent, scalable, large-scale scraper, Scrapy is an excellent choice. It also ships with middlewares for cookies, redirects, sessions, caching, etc. that help you deal with the complexities you may come across. For more, see my previous guide, Crawling the Web with Python and Scrapy.
- Selenium: for heavily JS-rendered pages or very sophisticated websites, Selenium WebDriver is the best choice. Selenium automates web browsers: it can open an automated Chrome or Firefox window, visit a URL, and navigate links. It is less efficient than the tools above, so reach for it when every other door to the data you need is closed. For more, see Web Scraping with Selenium.
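As a quick illustration of the BeautifulSoup bullet above, parsing a small in-memory HTML document (the markup here is made up for the demo):

```python
from bs4 import BeautifulSoup

html = """
<html><body>
  <h1>Demo</h1>
  <ul id="items"><li>one</li><li>two</li></ul>
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")  # stdlib parser; lxml also works
title = soup.find("h1").get_text()
items = [li.get_text() for li in soup.select("#items li")]
print(title, items)  # Demo ['one', 'two']
```

For a real page you would fetch `html` with `requests.get(url).text` first.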
A simple Scrapy example: https://www.digitalocean.com/community/tutorials/how-to-crawl-a-web-page-with-scrapy-and-python-3
That example does not require a login.
If a login is required, use Scrapy's FormRequest.
Using https://ktu3333.asuscomm.com:9085/enLogin.htm
as the example,
the login was tested successfully.
Scrapy only fetches static content; the target page uses JS and Ajax, so Selenium and a WebDriver are needed as well.
See why: https://www.geeksforgeeks.org/scrape-content-from-dynamic-websites/
How to install ChromeDriver on a Mac:
https://www.swtestacademy.com/install-chrome-driver-on-mac/
2021-07-27: the login no longer uses Scrapy, because after logging in, Scrapy and Selenium do not share a session; log in directly with Selenium instead.
Find elements with XPath; note how parameters are interpolated into the XPath string.
Code that runs so far (the scheduling feature is not added yet; Python version 3.8.6):
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
import time

options = webdriver.ChromeOptions()
options.add_argument('ignore-certificate-errors')
driver = webdriver.Chrome(chrome_options=options)

driver.get("https://ktu3333.asuscomm.com:9085/enLogin.htm")
time.sleep(5)
print("login page finish loaded")

# fill in the username and password fields, then click the login button
driver.find_element_by_id("loginname").send_keys("TheStringOfUsername")
driver.find_element_by_id("loginpass").send_keys("TheStringOfPassword")
driver.find_element_by_id("login_button").click()
time.sleep(5)
print("status page finish loaded")

driver.get("https://ktu3333.asuscomm.com:9085/enHBSim.htm")
time.sleep(20)
print("redirect success")

try:
    trows = driver.find_elements_by_xpath('//*[@id="OverviewInfo"]/tr')
    print("the tbody exist")
    print("total rows is :", len(trows))
    for i in range(1, len(trows) + 1):
        # XPath indices start at 1; interpolate the row number into the path
        col2 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[2]')
        col3 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[3]')
        col4 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[4]')
        col5 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[5]')
        col6 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[6]')
        print(col2.text, '\t', col3.text, '\t', col4.text, '\t', col5.text, '\t', col6.text)
except NoSuchElementException:
    print("Element does not exist")
driver.close()
Improved version: collect the results into a JSON array
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException
import time
import json

options = webdriver.ChromeOptions()
options.add_argument('ignore-certificate-errors')
driver = webdriver.Chrome(chrome_options=options)

driver.get("https://ktu3333.asuscomm.com:9085/enLogin.htm")
time.sleep(5)
print("login page finish loaded")

# fill in the username and password fields, then click the login button
driver.find_element_by_id("loginname").send_keys("StringOfUserName")
driver.find_element_by_id("loginpass").send_keys("StringOfPassword")
driver.find_element_by_id("login_button").click()
time.sleep(5)
print("status page finish loaded")

driver.get("https://ktu3333.asuscomm.com:9085/enHBSim.htm")
time.sleep(20)
print("redirect success")

try:
    trows = driver.find_elements_by_xpath('//*[@id="OverviewInfo"]/tr')
    print("the tbody exist")
    print("total rows is :", len(trows))
    totalList = []
    for i in range(1, len(trows) + 1):
        col2 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[2]')
        col3 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[3]')
        col4 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[4]')
        col5 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[5]')
        col6 = driver.find_element_by_xpath('//*[@id="OverviewInfo"]//tr[' + str(i) + ']/td[6]')
        print(col2.text, '\t', col3.text, '\t', col4.text, '\t', col5.text, '\t', col6.text)
        singleRecord = {'SIM': col2.text,
                        'Port Status': col3.text,
                        'Phone Number': col4.text,
                        'Last matched Balance': col5.text,
                        'Calculated Balance': col6.text}
        totalList.append(singleRecord)
    to_json = json.dumps(totalList)
    print(to_json)
except NoSuchElementException:
    print("Element does not exist")
driver.close()
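For the still-missing scheduling piece, one stdlib-only option is a timed loop that re-runs the scrape. `run_periodically` and `scrape_once` are hypothetical names; in practice the Selenium login-and-scrape code above would go inside `scrape_once`:

```python
import time

def run_periodically(action, interval_seconds, max_runs=None):
    """Call `action` every `interval_seconds`; run forever if max_runs is None."""
    runs = 0
    while max_runs is None or runs < max_runs:
        action()
        runs += 1
        if max_runs is None or runs < max_runs:
            time.sleep(interval_seconds)
    return runs

def scrape_once():
    # placeholder: put the Selenium login + table scrape here
    print("scraped at", time.strftime("%H:%M:%S"))

# e.g. run_periodically(scrape_once, 3600) would scrape once an hour;
# here a tiny interval and max_runs=3 just demonstrate the loop
print(run_periodically(scrape_once, 0.01, max_runs=3))  # 3
```

For production use, a cron job or the third-party `schedule` package would be more robust than a sleep loop.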
Flutter: use the showMenu function to display a popup menu and set its background color
How to capture packets from a Flutter web page (Chrome) with Wireshark on a Mac
Wireshark version: 3.4.5
macOS version: 11.3.1
Create a file named sslkeylog.log under /users/mac/documents with permissions 777.
Configure Wireshark to read that key log file.
On the command line, run:
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --user-data-dir=/tmp/chrome --ssl-key-log-file=/users/mac/documents/sslkeylog.log
This starts a fresh Chrome instance; open Wireshark and you can capture that Chrome's HTTP and HTTPS traffic.
But Flutter starts its web app with:
flutter run -d chrome
How do you pass arguments with this method? For example, something like:
flutter-web-admin-dashboard-ecommerce-main % flutter run -d chrome --chrome-args="--user-data-dir=/tmp/chrome --ssl-key-log-file=/users/mac/documents/sslkeylog.log"
This page raises the same question: https://github.com/dart-lang/webdev/issues/1080
The workaround: first run flutter run -d chrome, then open the Flutter page's URL in the Chrome started by:
/Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --user-data-dir=/tmp/chrome --ssl-key-log-file=/users/mac/documents/sslkeylog.log
Flutter: how to add the GetX package to an existing project
- First install the package:
get: ^4.1.4
- Run
flutter pub get -v
to fetch the package.
- Run
flutter pub global activate get_cli
to activate the CLI tool.
- Run
get -v
to check whether the GetX version is displayed; if the version prints correctly, everything so far is set up properly.
- For example, given an existing file /lib/widgets/layout/sms.dart, to add a controller for sms.dart you can run:
get create controller:bulksms on widgets/layout
The commands above come from: https://github.com/jonataslaw/get_cli
Flutter: how to make a DataTable adapt to the available width
SizedBox.expand results in the DataTable taking an infinite height, which the SingleChildScrollView won't like. Since you only want to span the width of the parent, you can use a LayoutBuilder to get the size of the parent you care about and then wrap the DataTable in a ConstrainedBox.
Widget build(BuildContext context) {
  return Scaffold(
    body: LayoutBuilder(
      builder: (context, constraints) => SingleChildScrollView(
        child: Column(
          children: [
            const Text('My Text'),
            Container(
              alignment: Alignment.topLeft,
              child: SingleChildScrollView(
                scrollDirection: Axis.horizontal,
                child: ConstrainedBox(
                  constraints: BoxConstraints(minWidth: constraints.minWidth),
                  child: DataTable(columns: [], rows: []),
                ),
              ),
            ),
          ],
        ),
      ),
    ),
  );
}