In this assignment, we’ll learn the basics of analyzing/thinking about alternative data.
What to submit
1. Submit a writeup and code. The writeup may be an .ipynb file with embedded output and commentary.
2. You must use code. Python, R, and Julia are acceptable. No VBA, Excel, etc.
3. You need data in the common_goods database, which I’ve stripped down to simpler versions of RavenPack and Tiingo. Note that I will not put the data in MySQL; it should be available in StarRocks or ClickHouse, and probably as Parquet.
Questions
1. Describe the Tiingo and RavenPack datasets along the lines of:
When they began in business
Tiingo Coverage universes
Fields / metrics
Are there any advantages to Tiingo?
Optional: licenses – how do they license their data? To make it easier, I’ve crossed this out, but it is still worth thinking about.
2. What is the lag in RavenPack between when news breaks and when it becomes observable by a fund? (For RP you can just look up the paper https://knowledge.wharton.upenn.edu/article/high-frequency-trading-profiting-news/)
What about Tiingo? For the former, you just have to read the paper. For the latter, you can calculate it from the crawl date versus the publish date. common_goods.tiingo_news
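For the Tiingo half, a minimal sketch of the lag calculation might look like this, assuming the table has been exported to a local Parquet file and that the crawl and publish timestamps are named crawlDate and publishedDate (adjust to the actual schema):

    import pandas as pd

    # hypothetical local export of common_goods.tiingo_news
    news = pd.read_parquet("tiingo_news.parquet")
    crawl = pd.to_datetime(news["crawlDate"], utc=True, errors="coerce")
    publish = pd.to_datetime(news["publishedDate"], utc=True, errors="coerce")

    lag_minutes = (crawl - publish).dt.total_seconds() / 60
    lag_minutes = lag_minutes[lag_minutes >= 0]  # drop rows stamped as published after the crawl
    print(lag_minutes.describe(percentiles=[0.25, 0.5, 0.75, 0.95]))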
3. Given the paper we discussed in class, can you implement the “News Momentum” strategy with Tiingo?
4. How many publishers/websites does RavenPack cover? What about Tiingo? For RavenPack I am referring to SOURCE_NAME; for Tiingo, the top-level domain of the URL.
Calculate the top 15 publishers by article count in each dataset for the year 2021, and point out one publisher that appears in Tiingo’s top 15 but not in RavenPack’s (a sketch follows the table references below).
common_goods.rp_2021 (year 2021 data)
tingle.news (use publishedDate)
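For the Tiingo side, a rough sketch of the domain counting (assuming a local export of tingle.news with columns named url and publishedDate; these names are assumptions, so adjust to the actual schema) might be:

    from urllib.parse import urlparse
    import pandas as pd

    news = pd.read_parquet("tiingo_news.parquet")  # hypothetical local export of tingle.news
    news["publishedDate"] = pd.to_datetime(news["publishedDate"], utc=True, errors="coerce")
    in_2021 = news[news["publishedDate"].dt.year == 2021].copy()

    # netloc gives e.g. "www.reuters.com"; keep the last two labels as a rough
    # domain grouping (this over-merges some country-code domains).
    in_2021["domain"] = in_2021["url"].map(
        lambda u: ".".join(urlparse(str(u)).netloc.lower().split(".")[-2:]))
    print(in_2021["domain"].value_counts().head(15))

RavenPack’s side is just the analogous group-by on SOURCE_NAME in common_goods.rp_2021.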
5. Link RavenPack and Tiingo to PERMNO. Then, for December 2021, rank stocks by news count (using publishedDate for Tiingo and TIMESTAMP_UTC for RavenPack). What stocks (common shares, e.g. SHRCD 10, 11) are the newsy-est (i.e. highest count of news) in both datasets?
If you did a rank correlation, imputing 0s for missing stocks, what is the correlation?
How many stocks are in the top 10% of article count in both datasets? (A pandas sketch follows the table definitions below.)
common_goods.count_tiingo_publishdate [
    CREATE TABLE class_2024.count_tiingo_crawl
    ENGINE = MergeTree ORDER BY (permno, month)
    SETTINGS allow_nullable_key = 1 AS
    SELECT permno,
           toStartOfMonth(date(left(crawlDate, 10)) + 7) AS month,
           toInt32(count()) AS n,
           comnam
    FROM tingle.news
    INNER JOIN crsp_202401.dsenames ON lower(news.ticker) = lower(dsenames.ticker)
    WHERE (left(crawlDate, 10) >= namedt) AND (left(crawlDate, 10) <= nameendt)
    GROUP BY ALL
]
common_goods.count_rp_monthly [
    CREATE TABLE class_2024.count_rp
    ENGINE = MergeTree ORDER BY (permno, month)
    SETTINGS allow_nullable_key = 1 AS
    SELECT permno,
           toInt32(count()) AS n,
           comnam,
           toStartOfMonth(date(left(TIMESTAMP_UTC, 10)) + 7) AS month
    FROM rpnew.full
    INNER JOIN (
        SELECT RP_ENTITY_ID, DATA_VALUE AS cusip
        FROM rpnew.entity_mapping_full_edition
        WHERE (RANGE_END = '' OR RANGE_END >= '2020-12-31') AND DATA_TYPE = 'CUSIP'
    ) AS mapping ON mapping.RP_ENTITY_ID = full.RP_ENTITY_ID
    INNER JOIN crsp_202401.dsenames
        ON left(lower(mapping.cusip), 8) = left(lower(dsenames.cusip), 8)
    WHERE left(TIMESTAMP_UTC, 10) BETWEEN namedt AND nameendt
    GROUP BY ALL
]
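For the rank correlation and the top-decile overlap, a rough pandas sketch (assuming the two monthly count tables above have been pulled into DataFrames named tiingo_counts and rp_counts, each with columns permno, month, n, and that month compares as the first of the month) could be:

    import pandas as pd

    merged = tiingo_counts.merge(rp_counts, on=["permno", "month"],
                                 how="outer", suffixes=("_tiingo", "_rp"))
    dec = merged[merged["month"] == "2021-12-01"].fillna({"n_tiingo": 0, "n_rp": 0})

    # Spearman rank correlation with zeros imputed for stocks missing from one source.
    print("rank correlation:", dec["n_tiingo"].corr(dec["n_rp"], method="spearman"))

    # Overlap of the top 10% by article count in each dataset (one reasonable universe choice).
    k = int(0.10 * len(dec))
    top_tiingo = set(dec.nlargest(k, "n_tiingo")["permno"])
    top_rp = set(dec.nlargest(k, "n_rp")["permno"])
    print("stocks in both top deciles:", len(top_tiingo & top_rp))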
6. Suppose you run a hedge fund and are considering basing your entire fund on news-based strategies, and you would make 3% extra annualized for any strategy using RavenPack news. Suppose RavenPack costs $100,000 per year. How much extra AUM would you need to make it worthwhile, assuming no other considerations, under a typical 20% profit mandate?
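If it helps to frame the arithmetic, one simple reading of the break-even (which you should feel free to refine) is that the data pays for itself when the fund’s 20% share of the extra 3% covers the subscription:

    annual_cost = 100_000    # RavenPack subscription
    extra_return = 0.03      # extra annualized return attributed to the data
    profit_share = 0.20      # typical 20% performance fee
    breakeven_aum = annual_cost / (extra_return * profit_share)
    print(f"break-even AUM: ${breakeven_aum:,.0f}")  # roughly $16.7 million under these assumptions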
7. Okay, now suppose you run a multistrat. How does your answer depend on the correlations between news and other data sources? What happens if you have data sources such as Google searches or Bloomberg searches for a ticker, which similarly capture investor attention? Does this increase or decrease your willingness to pay? In the alpha/beta framework, how might that show up?
8. RavenPack’s NLP methods are “old school” … given its HFT clientele, should they adopt GPT-4? Make a good argument for or against. What types of advantages might you get from using GPT-4?
9. What types of disadvantages or concerns might there be with back-testing with GPT-4? Hint: consider its training set.
10. RavenPack has a new product called RavenPack Edge, launched in September 2021. What’s wrong with going back in time and back-testing with RavenPack Edge back to 2005?
https://conference.nber.org/confer/2013/MMf13/von_Beschwitz_Keim_Massa.pdf
11. Extra credit for portfolio sorting (4 points): Using the data provided, look at news attention (log articles) at t-1. First sort on momentum, then sort on news (2 bins). Before doing any sorting, filter to prc >= 5 and market-cap bin >= 3 or above. How does a portfolio with high RavenPack attention look versus one with low attention? How about Tiingo? Why does high or low attention predict stronger momentum?
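If you want a starting point, here is a minimal pandas sketch of the double sort; the DataFrame panel and its column names (ret, mom, prc, mcap_bin, log_articles) are placeholders rather than the actual schema:

    import pandas as pd

    # panel: one row per stock-month with columns permno, month, ret, mom (t-12..t-2 return),
    # prc, mcap_bin, and log_articles measured at t-1 -- all placeholder names.
    df = panel[(panel["prc"].abs() >= 5) & (panel["mcap_bin"] >= 3)].copy()

    # First sort: momentum quintiles within each month.
    df["mom_q"] = df.groupby("month")["mom"].transform(
        lambda x: pd.qcut(x, 5, labels=False, duplicates="drop"))

    # Second sort: high vs. low news attention within each month-momentum bucket.
    df["attn"] = df.groupby(["month", "mom_q"])["log_articles"].transform(
        lambda x: (x > x.median()).astype(int))

    # Equal-weighted portfolio return for each (momentum, attention) cell.
    port = df.groupby(["month", "mom_q", "attn"])["ret"].mean().reset_index()

    # Winner-minus-loser momentum spread, averaged separately for high and low attention.
    wml = (port.pivot_table(index=["month", "attn"], columns="mom_q", values="ret")
               .pipe(lambda t: t[4] - t[0])
               .groupby("attn").mean())
    print(wml)  # compare the momentum premium in high- vs. low-attention stocks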
12. Extra credit (2 points): What are some metrics you could create that are not currently present in the RavenPack dataset, or which are subpar and can be improved?
13. Extra credit for portfolio sorting (4 points): Come up with another strategy based on some attention or behavioral finance theory (or, frankly, any other), and show me how news counts from RavenPack can be used to improve it. Run said strategy. You can look at the opensource2023 database, which already has about 200+ factors implemented.