
From Math to Code: Building GAM with Penalty Functions From Scratch
Enjoyed learning penalized GAM math. Built penalty matrices, optimized λ using GCV, and implemented our own GAM function. Confusing? Yes! Rewarding? Oh yes!
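Here is a minimal sketch of that idea in R, under assumptions I'm adding for illustration: a P-spline-style setup with a B-spline basis from `splines::bs()`, a second-order difference penalty matrix, and a grid search over λ that minimizes the GCV score. The post's own implementation may differ in the details.

```r
library(splines)

set.seed(42)
x <- seq(0, 1, length.out = 200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)

# B-spline design matrix and a second-order difference penalty matrix
B <- bs(x, df = 20, intercept = TRUE)       # n x k basis matrix
D <- diff(diag(ncol(B)), differences = 2)   # (k-2) x k difference operator
S <- t(D) %*% D                             # k x k penalty matrix

# Penalized least squares fit for one lambda, plus its GCV score
fit_pgam <- function(lambda) {
  A      <- t(B) %*% B + lambda * S
  beta   <- solve(A, t(B) %*% y)
  fitted <- B %*% beta
  edf    <- sum(diag(B %*% solve(A, t(B))))  # effective df = trace of hat matrix
  rss    <- sum((y - fitted)^2)
  n      <- length(y)
  list(beta = beta, fitted = fitted, edf = edf,
       gcv = n * rss / (n - edf)^2)
}

# Choose lambda by minimizing GCV over a grid
lambdas <- 10^seq(-4, 2, length.out = 50)
gcvs    <- sapply(lambdas, function(l) fit_pgam(l)$gcv)
best    <- fit_pgam(lambdas[which.min(gcvs)])
```

Larger λ shrinks the curve toward a straight line (smaller effective degrees of freedom); GCV trades that smoothness off against the residual sum of squares.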
I finally understood B-splines by working through the Cox-de Boor algorithm step by step, discovering they’re just weighted combinations of basis functions that make non-linear regression linear. What surprised me was that working through Bayesian statistics really helped me understand the engine behind the model! Will try this again in the future!
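For reference, here is a small R sketch of the Cox-de Boor recursion. The knot vector and evaluation point are my own illustrative choices, not taken from the post.

```r
# Cox-de Boor recursion: value of the i-th B-spline of degree p at x,
# given a non-decreasing knot vector. A plain, unoptimized sketch.
bspline_basis <- function(i, p, x, knots) {
  if (p == 0) {
    return(as.numeric(knots[i] <= x & x < knots[i + 1]))
  }
  # Left term: weight by distance to the left knot (0 if the span collapses)
  d1 <- knots[i + p] - knots[i]
  a <- if (d1 > 0) (x - knots[i]) / d1 * bspline_basis(i, p - 1, x, knots) else 0
  # Right term: weight by distance to the right knot
  d2 <- knots[i + p + 1] - knots[i + 1]
  b <- if (d2 > 0) (knots[i + p + 1] - x) / d2 * bspline_basis(i + 1, p - 1, x, knots) else 0
  a + b
}

# Example: cubic B-splines on [0, 1] with clamped (repeated) boundary knots
knots <- c(0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1)
vals  <- sapply(1:7, function(i) bspline_basis(i, p = 3, x = 0.6, knots = knots))
sum(vals)  # the basis functions at any interior point sum to 1
```

Each fitted value is then just a weighted sum of these basis functions, which is what makes the regression linear in the coefficients.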
We learned to derive the Newton-Raphson algorithm from a Taylor series approximation and implemented it for logistic regression in R. We’ll show how the second-order Taylor expansion leads to the Newton-Raphson update formula, then compare individual parameter updates against using the full Fisher information matrix for faster convergence.
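As a rough sketch of where that derivation lands: the full-matrix update is β_new = β + (X′WX)⁻¹ X′(y − p), which for logistic regression is the same as Fisher scoring because the canonical link makes observed and expected information coincide. Below is a minimal R implementation under my own assumed setup (simulated data, an intercept column in X); it is not the post's exact code.

```r
# Newton-Raphson (equivalently, Fisher scoring) for logistic regression.
# Assumes a design matrix X with an intercept column and a binary outcome y.
newton_logistic <- function(X, y, tol = 1e-8, max_iter = 25) {
  beta <- rep(0, ncol(X))
  for (iter in seq_len(max_iter)) {
    eta   <- X %*% beta
    p     <- 1 / (1 + exp(-eta))          # fitted probabilities
    W     <- as.vector(p * (1 - p))       # variance weights
    score <- t(X) %*% (y - p)             # gradient of the log-likelihood
    info  <- t(X) %*% (X * W)             # Fisher information X' W X
    step  <- solve(info, score)           # full-matrix Newton-Raphson step
    beta  <- beta + step
    if (max(abs(step)) < tol) break
  }
  list(coef = as.vector(beta), se = sqrt(diag(solve(info))), iter = iter)
}

# Sanity check against glm()
set.seed(1)
n  <- 500
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- rbinom(n, 1, plogis(-0.5 + 1.2 * x1 - 0.8 * x2))
X  <- cbind(1, x1, x2)
fit <- newton_logistic(X, y)
rbind(ours = fit$coef, glm = coef(glm(y ~ x1 + x2, family = binomial)))
```

Updating one coefficient at a time ignores the off-diagonal information terms, which is why the full-matrix step usually converges in far fewer iterations.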
Refreshed my rusty calculus skills lately! 🤓 Finally understand what happens during complete separation and why those coefficient standard errors get so extreme. The math behind maximum likelihood estimation makes more sense now! The chain rule, quotient rule, and matrix inversion are crucial!
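A tiny R example (toy data I made up, not from the post) shows the symptom: when the outcome is perfectly predicted, the likelihood keeps improving as the coefficient grows without bound, so the estimate and its standard error blow up.

```r
# Complete separation: y = 1 exactly when x > 4
x <- c(1, 2, 3, 4, 5, 6, 7, 8)
y <- c(0, 0, 0, 0, 1, 1, 1, 1)

fit <- glm(y ~ x, family = binomial)
# glm() typically warns that fitted probabilities of 0 or 1 occurred;
# the slope estimate is huge and its standard error is enormous.
summary(fit)$coefficients
```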
In my simulations of Response Adaptive Randomization, I discovered it performs comparably to a fixed 50-50 allocation in identifying treatment effects. The adaptive approach does appear to work! However, with only 10 trials, I’ve merely scratched the surface. Important limitations exist: temporal bias risks, statistical inefficiency, and complex multiplicity adjustments in Bayesian frameworks.
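For flavor, here is one way such a simulation can be set up in R, assuming a Beta-Bernoulli model and Thompson-sampling-style allocation (my illustrative choices; the post's simulation design and its 10-trial setup may differ).

```r
# Response-adaptive randomization sketch: the chance of assigning the
# treatment arm follows posterior draws under Beta(1 + successes, 1 + failures).
simulate_rar <- function(n = 200, p_control = 0.3, p_treat = 0.5) {
  succ <- c(control = 0, treat = 0)
  fail <- c(control = 0, treat = 0)
  for (i in seq_len(n)) {
    draw_c <- rbeta(1, 1 + succ["control"], 1 + fail["control"])
    draw_t <- rbeta(1, 1 + succ["treat"],   1 + fail["treat"])
    arm <- if (draw_t > draw_c) "treat" else "control"
    outcome <- rbinom(1, 1, if (arm == "treat") p_treat else p_control)
    succ[arm] <- succ[arm] + outcome
    fail[arm] <- fail[arm] + (1 - outcome)
  }
  # Monte Carlo posterior probability that treatment beats control
  post_t <- rbeta(1e4, 1 + succ["treat"],   1 + fail["treat"])
  post_c <- rbeta(1e4, 1 + succ["control"], 1 + fail["control"])
  c(n_treat = unname(succ["treat"] + fail["treat"]),
    pr_treat_better = mean(post_t > post_c))
}

set.seed(123)
simulate_rar()
```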