Order Slope Factor Series
Created by bq7zuymm, last edited by bq7zuymm; viewed by 49 users
As the title suggests, this series covers the order slope family of factors. It relies on the streaming table cn_stock_level2_snapshot; to enable streaming data access, please contact 小Q.
Time-Weighted Order Slope
Data Definition
We use data in the following format:
Time | Bid 1 price | Ask 1 price | Bid 1 size | Ask 1 size | Volume |
---|---|---|---|---|---|
t1 | b1 | a1 | bv1 | av1 | v1 |
t2 | b2 | a2 | bv2 | av2 | v2 |
… | … | … | … | … | … |
tn | bn | an | bvn | avn | vn |
This gives the factor expression (the original formula image is missing; it is reconstructed here from the factor code below):

$$\mathrm{TimeSlope} = \frac{1}{n} \sum_{i=1}^{n} \frac{\ln(a_i/b_i)}{\ln(av_i/bv_i + 1)}$$

The sum runs over the data points within one minute, and n is the number of valid data points (valid snapshots) in that minute. We define a snapshot as invalid when its cumulative volume equals that of the previous snapshot; data points whose denominator is zero are also invalid. For the minute-level factor pipeline, see the post on the minute-level average order-book bid size (分钟盘口平均委买量).
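Before wiring this into the stream, the per-snapshot slope, the validity filter, and the minute-level average can be sketched on a toy pandas DataFrame (all values hypothetical; independent of the dai stream):

```python
import numpy as np
import pandas as pd

# Toy snapshots: three ticks in the same minute; the third shows no new trades.
df = pd.DataFrame({
    "datetime": pd.to_datetime([
        "2024-01-02 09:30:03", "2024-01-02 09:30:06", "2024-01-02 09:30:09",
    ]),
    "bid_price1": [10.00, 10.01, 10.01],
    "ask_price1": [10.02, 10.03, 10.03],
    "bid_volume1": [500, 400, 400],
    "ask_volume1": [300, 600, 600],
    "volume": [1000, 1500, 1500],  # cumulative volume; unchanged on the last tick
})

# Per-snapshot order slope: ln(a/b) / ln(av/bv + 1)
df["quoteslope"] = (
    np.log(df["ask_price1"] / df["bid_price1"])
    / np.log(df["ask_volume1"] / df["bid_volume1"] + 1)
)

# A snapshot is invalid when cumulative volume did not change vs. the previous one.
df.loc[df["volume"].diff() == 0, "quoteslope"] = np.nan

# Time weighting = equal-weight average of the valid snapshots in each minute.
minute_factor = df.set_index("datetime")["quoteslope"].resample("1min").mean()
print(minute_factor)
```

Only the first two snapshots survive the filter, so the 09:30 bucket is the plain mean of their two slopes.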
Factor Code
The usual trio: imports, subscription, and data query:
import dai
import time
import numpy as np
# Subscribe to the level-2 snapshot stream
instruments = "('000001.SZ')"
dai.pull_data_to_table(datasource='cn_stock_level2_snapshot', table_name='stock_table', overwrite=True, lookback_time=72*60*60)
# Query the data
sql = f"""
SELECT date_trunc('minute', to_timestamp(datetime * 1.0 / 1000 + 8 * 60 * 60)) as date, instrument, datetime,
ask_price1, bid_price1, ask_volume1, bid_volume1, volume,
LAG(volume, 1) OVER (PARTITION BY instrument ORDER BY datetime) AS lag_volume
FROM stock_table
WHERE instrument in {instruments}
"""
engine = dai.stream_factor(sql, 'test', True, 'datetime ASC')
We combine the data cold start with the factor computation:
while True:
    time.sleep(60)
    df = engine.df()
    if len(df) == 0:
        continue
    # Per-snapshot order slope: ln(a/b) / ln(av/bv + 1)
    df['quoteslope'] = np.log(df['ask_price1'] / df['bid_price1']) / np.log(df['ask_volume1'] / df['bid_volume1'] + 1)
    # Zero denominators produce inf; treat those points as invalid too
    df = df.replace([np.inf, -np.inf], np.nan).dropna()
    # Further drop data points whose volume increment is zero
    df['quoteslope'] = np.where(df['volume'] == df['lag_volume'], np.nan, df['quoteslope'])
    df = df.dropna()
    # Minute-level factor aggregation
    sql = """
    SELECT date, instrument, AVG(quoteslope) AS tavg_quoteslope
    FROM df
    GROUP BY date, instrument
    ORDER BY date
    """
    data = dai.query(sql).df()
Results
Volume-Weighted Order Slope
Data Definition
We reuse the notation defined for the time-weighted order slope to state the volume-weighted formula.
We already defined the volume-weighting scheme when building the volume-weighted net bid ratio factor (成交量加权净委买比例).
Note that these weights differ from the time weights: with time weighting, every weight within a minute is the same constant 1/n, where n is the number of snapshots in the minute, whereas volume weighting uses the volume increments as weights (the original formula image is missing; it is reconstructed here from the factor code below):

$$\mathrm{VolSlope} = \frac{\sum_{i=1}^{n} (v_i - v_{i-1}) \cdot \dfrac{\ln(a_i/b_i)}{\ln(av_i/bv_i + 1)}}{\sum_{i=1}^{n} (v_i - v_{i-1})}$$
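A small numeric sketch (hypothetical numbers) makes the difference between the two weighting schemes concrete:

```python
import numpy as np

# Hypothetical per-snapshot slopes within one minute, and cumulative volume.
slope = np.array([0.004, 0.002, 0.003])
volume = np.array([1000, 1300, 1500])      # cumulative traded volume
dv = np.diff(volume, prepend=200)          # per-snapshot volume increments

time_weighted = slope.mean()                     # every weight is 1/n
volume_weighted = (slope * dv).sum() / dv.sum()  # weights are volume increments

print(time_weighted, volume_weighted)
```

Because the largest increment (800) lines up with the largest slope (0.004), the volume-weighted value is pulled above the plain mean.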
Factor Code
import dai
import time
import numpy as np
instruments = "('000002.SZ')"
dai.pull_data_to_table(datasource='cn_stock_level2_snapshot', table_name='stock_table', overwrite=True, lookback_time=72*60*60)
sql = f"""
SELECT date_trunc('minute', to_timestamp(datetime * 1.0 / 1000 + 8 * 60 * 60)) as date, instrument, datetime,
ask_price1, bid_price1, ask_volume1, bid_volume1, volume,
LAG(volume, 1) OVER (PARTITION BY instrument ORDER BY datetime) AS lag_volume
FROM stock_table
WHERE instrument in {instruments}
"""
engine = dai.stream_factor(sql, 'test', True, 'datetime ASC')
# Real-time factor computation
while True:
    time.sleep(60)
    data = engine.df()
    if len(data) == 0:
        continue
    # Numerator: volume increment times the per-snapshot slope
    data['up'] = (data['volume'] - data['lag_volume']) * (np.log(data['ask_price1']) - np.log(data['bid_price1'])) / np.log(data['ask_volume1'] / data['bid_volume1'] + 1)
    # Denominator: volume increment
    data['down'] = data['volume'] - data['lag_volume']
    # Minute-level weighted average
    sql = """
    SELECT date, instrument, SUM(up) / SUM(down) AS vavg_quoteslope
    FROM data
    GROUP BY date, instrument
    """
    stream_data = dai.query(sql).df()
Results
Trade-Price-Weighted Order Slope
Data Definition
Building the time-weighted order slope used the bid 1 price, ask 1 price, bid 1 size, ask 1 size, and volume. The trade-price-weighted version additionally needs turnover to compute the average trade price, so we add a turnover column:
Time | Bid 1 price | Ask 1 price | Bid 1 size | Ask 1 size | Volume | Turnover |
---|---|---|---|---|---|---|
t1 | b1 | a1 | bv1 | av1 | v1 | amount1 |
t2 | b2 | a2 | bv2 | av2 | v2 | amount2 |
… | … | … | … | … | … | … |
tn | bn | an | bvn | avn | vn | amountn |
The formula is (the original formula image is missing; it is reconstructed here from the factor code below):

$$\mathrm{PxSlope} = \frac{\sum_{i=1}^{n} w_i \cdot \dfrac{\ln(a_i/b_i)}{\ln(av_i/bv_i + 1)}}{\sum_{i=1}^{n} w_i}, \qquad w_i = \frac{amount_i - amount_{i-1}}{v_i - v_{i-1}}$$

We defined these weights in the post on the trade-price-weighted net bid ratio (成交价加权净委买比例); the weights here are identical, so we do not repeat the derivation.
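As a sketch with hypothetical numbers, each weight is the per-interval average trade price (the interval VWAP, turnover increment divided by volume increment):

```python
import numpy as np

# Hypothetical per-snapshot slopes, cumulative volume, and cumulative turnover.
slope = np.array([0.004, 0.002, 0.003])
volume = np.array([1000, 1300, 1500])
amount = np.array([10050.0, 13100.0, 15140.0])

dv = np.diff(volume, prepend=200)      # volume increments
da = np.diff(amount, prepend=2000.0)   # turnover increments
vwap = da / dv                         # per-interval average trade price

# Trade-price-weighted order slope for the minute.
price_weighted = (slope * vwap).sum() / vwap.sum()
print(price_weighted)
```

Since the interval VWAPs vary far less than volume increments do, this weighting usually stays close to the plain time-weighted mean.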
Factor Code
import dai
import time
import numpy as np
instruments = "('000002.SZ')"
dai.pull_data_to_table(datasource='cn_stock_level2_snapshot', table_name='stock_table', overwrite=True, lookback_time=72*60*60)
sql = f"""
SELECT date_trunc('minute', to_timestamp(datetime * 1.0 / 1000 + 8 * 60 * 60)) as date, instrument, datetime,
ask_price1, bid_price1, ask_volume1, bid_volume1, volume,
LAG(volume, 1) OVER (PARTITION BY instrument ORDER BY datetime) AS lag_volume,
amount,
LAG(amount, 1) OVER (PARTITION BY instrument ORDER BY datetime) AS lag_amount
FROM stock_table
WHERE instrument in {instruments}
"""
engine = dai.stream_factor(sql, 'test', True, 'datetime ASC')
# Streaming factor computation
while True:
    data = engine.df()
    if len(data) == 0:
        continue
    # Numerator: per-interval average trade price times the per-snapshot slope
    data['up'] = (np.log(data['ask_price1']) - np.log(data['bid_price1'])) / np.log(data['ask_volume1'] / data['bid_volume1'] + 1) * ((data['amount'] - data['lag_amount']) / (data['volume'] - data['lag_volume']))
    # Denominator: per-interval average trade price
    data['down'] = (data['amount'] - data['lag_amount']) / (data['volume'] - data['lag_volume'])
    sql = """
    SELECT date, instrument, SUM(up)/SUM(down) AS pavg_quoteslope
    FROM data
    GROUP BY date, instrument
    """
    stream_data = dai.query(sql).df().dropna()
    time.sleep(60)