Unlike MLE, Bayesian estimation treats the parameter $\theta$ as a random variable and estimates $\theta$ through its posterior density. Let $X$ be the observed data, $P(X|\theta)$ the density function of $X$ given parameter $\theta$, and $\pi(\theta)$ the prior density function of $\theta$. Then we have
$$ P(\theta|X) = \frac{P(X|\theta)\pi(\theta)}{P(X)} \propto P(X|\theta)\pi(\theta) $$
Usually we assume $\pi(\theta)$ to be a Gaussian distribution. From the posterior, Bayesian estimation gives rise to different estimation methods.
Point Estimation
$$\hat{\theta} = E[\theta|X] = \int \theta \, P(\theta|X) \, \mathrm{d}\theta$$
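The posterior-mean estimator can be illustrated numerically. This is a minimal sketch, assuming a Gaussian likelihood $X_i \sim N(\theta, \sigma^2)$ with known $\sigma$ and the Gaussian prior $\theta \sim N(\mu_0, \tau_0^2)$ mentioned above (all parameter values here are hypothetical); it compares the closed-form conjugate posterior mean against direct numerical integration of $\int \theta\, P(\theta|X)\,\mathrm{d}\theta$:

```python
import numpy as np

# Hypothetical setup: X_i ~ N(theta, sigma^2), prior theta ~ N(mu0, tau0^2)
rng = np.random.default_rng(0)
sigma, mu0, tau0 = 1.0, 0.0, 2.0
X = rng.normal(1.5, sigma, size=20)
n = len(X)

# Closed-form posterior mean for the conjugate Gaussian-Gaussian case:
# precision-weighted combination of the data mean and the prior mean.
post_prec = n / sigma**2 + 1 / tau0**2
post_mean = (X.sum() / sigma**2 + mu0 / tau0**2) / post_prec

# Numerical check of  E[theta|X] = ∫ theta P(theta|X) dtheta  on a grid.
theta = np.linspace(-5.0, 5.0, 100_001)
dtheta = theta[1] - theta[0]
# log P(X|theta) + log pi(theta), up to constants (P(X) cancels on normalizing)
log_post = (-0.5 * ((X[:, None] - theta) / sigma) ** 2).sum(axis=0) \
           - 0.5 * ((theta - mu0) / tau0) ** 2
post = np.exp(log_post - log_post.max())
post /= post.sum() * dtheta            # normalize: this division plays the role of P(X)
numeric_mean = (theta * post).sum() * dtheta

print(post_mean, numeric_mean)         # the two estimates agree closely
```

Note that the unnormalized product $P(X|\theta)\pi(\theta)$ suffices: normalizing the grid values implicitly divides out $P(X)$.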
MAP (Maximum a Posteriori)
$$\hat{\theta} = \arg\max_{\theta} P(\theta|X) = \arg\max_{\theta} P(X|\theta)\pi(\theta)$$
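The second equality holds because $P(X)$ does not depend on $\theta$, so it can be dropped from the maximization. A minimal sketch of MAP estimation, again under the hypothetical assumption of a Gaussian likelihood $X_i \sim N(\theta, 1)$ and the Gaussian prior $\theta \sim N(0, \tau_0^2)$: maximize $\log P(X|\theta) + \log\pi(\theta)$ over a grid and compare with the closed form available in this conjugate case.

```python
import numpy as np

# Hypothetical setup: X_i ~ N(theta, 1), prior theta ~ N(0, tau0^2)
rng = np.random.default_rng(1)
tau0 = 0.5
X = rng.normal(2.0, 1.0, size=10)

# Maximize log P(X|theta) + log pi(theta); P(X) is constant in theta,
# so it drops out of the argmax.
theta = np.linspace(-5.0, 5.0, 200_001)
log_post = (-0.5 * (X[:, None] - theta) ** 2).sum(axis=0) \
           - 0.5 * (theta / tau0) ** 2
theta_map = theta[np.argmax(log_post)]

# In this Gaussian-Gaussian case the posterior is Gaussian, so the MAP
# coincides with the posterior mean, which has a closed form:
theta_closed = X.sum() / (len(X) + 1 / tau0**2)
print(theta_map, theta_closed)   # the grid argmax matches the closed form
```

In practice the log-posterior is usually maximized with a gradient-based optimizer rather than a grid; the grid is used here only to make the argmax explicit.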