Testing Super Learner’s Coverage - A Note To Myself

Testing Super Learner with TMLE showed some interesting patterns 🤔 XGBoost + random forest only hit ~54% coverage, but tuned XGBoost + GLM reached ~90%. Seems like pairing flexible learners with stable (even misspecified) models helps? Need to explore this more with different setups 📊

My Messy Notes on Building a Super Learner: Peeking Under The Hood of NNLS

📚 Tried building Super Learner from scratch to understand what’s happening under the hood. Walked through the NNLS algorithm step by step; turns out ensembling models may beat solo models! Our homegrown version? Surprisingly close to the nnls package’s results ❤️ But does it really work in real life? 🤷‍♂️
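The NNLS weighting step mentioned above can be sketched in a few lines. This is a minimal toy illustration, not the post's actual code: the data, the three stand-in "learner" prediction columns, and the use of SciPy's `scipy.optimize.nnls` (in place of a from-scratch solver or the R `nnls` package) are all my own assumptions here.

```python
import numpy as np
from scipy.optimize import nnls

# Toy data: y = sin(x) + noise (hypothetical stand-in for real CV data)
rng = np.random.default_rng(0)
n = 200
x = rng.uniform(-2, 2, n)
y = np.sin(x) + rng.normal(0, 0.1, n)

# Columns = cross-validated predictions from each base learner
# (here faked with simple functions so the example is self-contained):
Z = np.column_stack([
    np.sin(x),             # a well-specified learner
    x,                     # a misspecified linear learner
    np.full(n, y.mean()),  # an intercept-only learner
])

# NNLS step: minimize ||Z w - y||^2 subject to w >= 0
w, _ = nnls(Z, y)

# Normalize to convex weights, as Super Learner does
w = w / w.sum()
print(np.round(w, 3))  # most weight should land on the sin(x) learner
```

The non-negativity constraint is what keeps the ensemble a sensible weighted average: a bad learner gets weight shrunk to zero rather than a destabilizing negative coefficient.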