Taylor Series Approximation To The Newton-Raphson Algorithm - A Note To Myself On The Proof

We learn to derive the Newton-Raphson algorithm from a Taylor series approximation and implement it for logistic regression in R. We’ll show how the second-order Taylor expansion leads to the Newton-Raphson update formula, then compare individual parameter updates with using the full Fisher information matrix for faster convergence.
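The key step can be sketched as follows, writing \(\ell(\beta)\) for the log-likelihood and \(\beta^{(t)}\) for the current estimate (notation assumed here, not fixed by the post):

```latex
% Second-order Taylor expansion of the log-likelihood \ell around \beta^{(t)}:
\ell(\beta) \approx \ell(\beta^{(t)})
  + \nabla\ell(\beta^{(t)})^{\top}(\beta - \beta^{(t)})
  + \tfrac{1}{2}(\beta - \beta^{(t)})^{\top} H(\beta^{(t)}) (\beta - \beta^{(t)})

% Setting the gradient of the right-hand side with respect to \beta
% to zero yields the Newton-Raphson update:
\beta^{(t+1)} = \beta^{(t)} - H(\beta^{(t)})^{-1}\, \nabla\ell(\beta^{(t)})

% Fisher scoring replaces -H with the Fisher information I(\beta):
\beta^{(t+1)} = \beta^{(t)} + I(\beta^{(t)})^{-1}\, \nabla\ell(\beta^{(t)})
```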

From Complete Separation To Maximum Likelihood Estimation in Logistic Regression: A Note To Myself

Refreshed my rusty calculus skills lately! 🤓 Finally understand what happens during complete separation and why those coefficient standard errors get so extreme. The math behind maximum likelihood estimation makes more sense now! The chain rule, quotient rule, and matrix inversion are all crucial!
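The pieces mentioned above fit together roughly like this (a sketch, with \(p_i\) denoting the fitted probability for observation \(i\)):

```latex
% Logistic log-likelihood, with p_i = 1 / (1 + e^{-x_i^{\top}\beta}):
\ell(\beta) = \sum_i \left[ y_i \log p_i + (1 - y_i)\log(1 - p_i) \right]

% The chain and quotient rules give the score and the Hessian:
\nabla\ell(\beta) = \sum_i (y_i - p_i)\, x_i, \qquad
H(\beta) = -\sum_i p_i (1 - p_i)\, x_i x_i^{\top}

% Standard errors come from inverting the information matrix.
% Under complete separation every p_i is driven toward 0 or 1,
% so p_i(1 - p_i) \to 0, the information matrix becomes nearly
% singular, and its inverted diagonal entries (the squared SEs) blow up.
```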

Simulating A Simple Response-Adaptive Randomization - I Have To See It To Believe It

In my simulations of Response-Adaptive Randomization, I discovered it performs comparably to a fixed 50-50 allocation in identifying treatment effects. The adaptive approach does appear to work! However, with only 10 trials, I’ve merely scratched the surface. Important limitations remain - temporal bias risks, statistical inefficiency, and complex multiplicity adjustments in Bayesian frameworks.
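The post’s simulations were in R and are not reproduced here; a minimal Python sketch of one common form of RAR (a Thompson-sampling-style rule with Beta posteriors) might look like the following. All parameter values, names, and the specific allocation rule are illustrative assumptions, not taken from the post:

```python
import random

def simulate_rar(p_control=0.3, p_treatment=0.5, n_patients=200, seed=1):
    """Simulate one trial under response-adaptive randomization.

    Each patient is assigned to the arm whose Beta posterior draw is
    larger (a Thompson-sampling-style allocation rule). The response
    probabilities and sample size are illustrative assumptions.
    """
    rng = random.Random(seed)
    # Successes / failures per arm, starting from Beta(1, 1) priors.
    s = {"control": 1, "treatment": 1}
    f = {"control": 1, "treatment": 1}
    n = {"control": 0, "treatment": 0}
    for _ in range(n_patients):
        # Draw from each arm's Beta posterior; assign to the larger draw.
        draw_c = rng.betavariate(s["control"], f["control"])
        draw_t = rng.betavariate(s["treatment"], f["treatment"])
        arm = "treatment" if draw_t > draw_c else "control"
        n[arm] += 1
        # Observe a Bernoulli response and update that arm's posterior.
        p = p_treatment if arm == "treatment" else p_control
        if rng.random() < p:
            s[arm] += 1
        else:
            f[arm] += 1
    return n, s, f

n, s, f = simulate_rar()
print(n)  # the better arm should usually accumulate more patients
```

Repeating this over many seeds (the post used 10 trials) and comparing against a fixed 50-50 allocation is what reveals the comparable power alongside the drift toward the better-performing arm.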