## Empirical Bernstein Confidence Bound

**Published:**

A short note on a useful result for deriving concentration bounds.
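For reference, the bound in the post's title can be sketched as follows. This is a minimal implementation assuming the Maurer–Pontil (2009) form of the empirical Bernstein bound for i.i.d. observations in a bounded range; the function name is mine, not the note's.

```python
import math

def empirical_bernstein_radius(sample, delta, value_range=1.0):
    # Empirical Bernstein confidence radius: with probability at least 1 - delta,
    #   |sample mean - true mean| <= sqrt(2 * V * log(2/delta) / n)
    #                                + 7 * R * log(2/delta) / (3 * (n - 1)),
    # where V is the sample variance and R the range of the observations.
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)  # unbiased sample variance
    log_term = math.log(2.0 / delta)
    return (math.sqrt(2.0 * var * log_term / n)
            + 7.0 * value_range * log_term / (3.0 * (n - 1)))
```

Unlike Hoeffding's bound, the leading term scales with the *empirical* variance, so the interval tightens automatically when the data are low-variance.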

**Published:**

The following note is inspired by a discussion of the properties of linear kernel functions. Although they are not expected to perform well in SVGD, they can provide exact estimates of certain quantities, including the mean and variance. Three kinds of kernel functions — the constant, linear, and polynomial kernels — are explored here, to see which functions they can estimate exactly and how well the particles approximate the target distribution.
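As a hypothetical illustration of the exactness claim (my own sketch, not the note's code): with the linear kernel k(x, y) = xy + 1 on a one-dimensional target, the SVGD fixed point forces phi(x_i) = a + b*x_i to vanish at every particle, so both coefficients must be zero — which pins the sample mean and variance to the target's exact values.

```python
import numpy as np

def svgd_linear_kernel(x, score, step=0.05, iters=4000):
    # SVGD updates with the linear kernel k(x, y) = x * y + 1 (1-D particles).
    # The update direction is affine in x, phi(x_i) = a + b * x_i, so at a
    # fixed point a = b = 0, which forces the first two sample moments to
    # match the target exactly.
    n = len(x)
    for _ in range(iters):
        k = np.outer(x, x) + 1.0       # k[j, i] = x_j * x_i + 1
        grad_k = np.tile(x, (n, 1))    # d k(x_j, x_i) / d x_j = x_i
        phi = (k @ score(x) + grad_k.sum(axis=0)) / n
        x = x + step * phi
    return x
```

For a target N(2, 4), the score is score(x) = -(x - 2) / 4, and the converged particles have mean 2 and variance 4 up to numerical error, even with very few particles.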

**Published:**

First post of the year :-) Below are two talks I gave at the machine learning seminar.

**Published:**

Reading notes for the paper *Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm* (Liu & Wang, 2016), covering Stein's identity, the Stein discrepancy, and finally Stein variational gradient descent.
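For context, the core SVGD update from the paper can be sketched in a few lines. This is a one-dimensional version with an RBF kernel of fixed bandwidth (the paper uses a median heuristic to set the bandwidth); the function and parameter names are mine.

```python
import numpy as np

def svgd_step(x, grad_logp, bandwidth=1.0, step=0.1):
    # One SVGD update: x_i <- x_i + step * phi(x_i), where
    # phi(x) = (1/n) * sum_j [ k(x_j, x) * d/dx_j log p(x_j) + d/dx_j k(x_j, x) ].
    n = x.shape[0]
    diffs = x[:, None] - x[None, :]                  # diffs[j, i] = x_j - x_i
    k = np.exp(-diffs ** 2 / (2 * bandwidth ** 2))   # RBF kernel matrix
    grad_k = -diffs / bandwidth ** 2 * k             # d k(x_j, x_i) / d x_j
    phi = (k @ grad_logp(x) + grad_k.sum(axis=0)) / n
    return x + step * phi
```

The first term in phi pulls particles toward high-density regions of the target; the second acts as a repulsive force that keeps the particles spread out, which is what distinguishes SVGD from plain gradient ascent on log p.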

**Published:**

Short notes on five papers I read recently on applications of Stein's method, including wild variational inference, reinforcement learning, and sampling. Most of the papers come from the project *Stein's Method for Practical Machine Learning*, which I am quite interested in.
