
Multi-Armed Bandits

Mar 07, 2016 · 11 min

Episode description

Multi-armed bandits: how to take your randomized experiment and make it harder, better, faster, stronger. Basically, a multi-armed bandit experiment lets you optimize for both learning (exploring to find out which option is best) and making use of your knowledge (exploiting the best option found so far) at the same time. It's what the pros (like Google Analytics) use, and it's got a great name, so... winner! Relevant link: https://support.google.com/analytics/answer/2844870?hl=en
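
To make the learn-while-exploiting idea concrete, here is a minimal epsilon-greedy sketch, one common bandit strategy (not necessarily the specific algorithm discussed in the episode or used by Google Analytics). The variant names and conversion rates are made-up assumptions for illustration: with probability epsilon it explores a random variant, and otherwise it exploits the variant with the best observed rate so far.

```python
import random

# Hypothetical conversion rates for three page variants -- unknown to the
# algorithm, used only to simulate visitor behavior.
TRUE_RATES = {"variant_a": 0.05, "variant_b": 0.11, "variant_c": 0.08}

def epsilon_greedy(epsilon=0.1, n_visitors=10_000):
    arms = list(TRUE_RATES)
    pulls = {arm: 0 for arm in arms}  # times each variant was shown
    wins = {arm: 0 for arm in arms}   # conversions observed per variant

    for _ in range(n_visitors):
        if random.random() < epsilon:
            arm = random.choice(arms)  # explore: show a random variant
        else:
            # exploit: show the variant with the best observed rate so far
            # (unseen variants score +inf, so each gets tried at least once)
            arm = max(arms, key=lambda a: wins[a] / pulls[a] if pulls[a] else float("inf"))
        pulls[arm] += 1
        wins[arm] += random.random() < TRUE_RATES[arm]  # simulate a conversion

    for arm in arms:
        rate = wins[arm] / pulls[arm] if pulls[arm] else 0.0
        print(f"{arm}: shown {pulls[arm]:>5} times, observed rate {rate:.3f}")

if __name__ == "__main__":
    epsilon_greedy()
```

Run it a few times and you'll see the point of the episode: most of the traffic flows to the best-performing variant while it's still being measured, instead of waiting for a fixed A/B test to finish.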